Test Report: KVM_Linux_containerd 15642

4cf467cecc4d49355139c24bc1420f3978a367dd:2023-01-14:27426

Failed tests (3/297)

| Order | Failed Test                      | Duration (s) |
|-------|----------------------------------|--------------|
| 203   | TestPreload                      | 192.53       |
| 209   | TestRunningBinaryUpgrade         | 1730.14      |
| 218   | TestStoppedBinaryUpgrade/Upgrade | 254.03       |
TestPreload (192.53s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-105443 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0114 10:56:36.137069   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-105443 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m59.260415559s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-105443 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-105443 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (1.75707615s)
preload_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-105443 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.6
E0114 10:57:31.430526   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
preload_test.go:67: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-105443 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.6: (1m8.313546924s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-105443 -- sudo crictl image ls
preload_test.go:81: Expected to find gcr.io/k8s-minikube/busybox in output of `docker images`, instead got 
-- stdout --
	IMAGE                                     TAG                  IMAGE ID            SIZE
	docker.io/kindest/kindnetd                v20220726-ed811e41   d921cee849482       25.8MB
	gcr.io/k8s-minikube/storage-provisioner   v5                   6e38f40d628db       9.06MB
	k8s.gcr.io/coredns/coredns                v1.8.6               a4ca41631cc7a       13.6MB
	k8s.gcr.io/etcd                           3.5.3-0              aebe758cef4cd       102MB
	k8s.gcr.io/kube-apiserver                 v1.24.6              860f263331c95       33.8MB
	k8s.gcr.io/kube-controller-manager        v1.24.6              c6c20157a4233       31MB
	k8s.gcr.io/kube-proxy                     v1.24.6              0bb39497ab33b       39.5MB
	k8s.gcr.io/kube-scheduler                 v1.24.6              c786c777a4e1c       15.5MB
	k8s.gcr.io/pause                          3.7                  221177c6082a8       311kB

-- /stdout --
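The failure at preload_test.go:81 is effectively a substring check: the test pulls gcr.io/k8s-minikube/busybox before restarting the cluster with v1.24.6, then expects the image to survive in the `crictl image ls` output. It is absent above, which suggests the downloaded v1.24.6 preload tarball replaced the node's image store. A minimal sketch of that check in Python (`image_present` is a hypothetical helper for illustration, not minikube's actual test code):

```python
# Hypothetical sketch of the assertion behind preload_test.go:81;
# minikube's real test is written in Go.

def image_present(crictl_output: str, image: str) -> bool:
    """Return True if `image` appears on any line of `crictl image ls` output."""
    return any(image in line for line in crictl_output.splitlines())

# Abbreviated image list from the failing run: busybox is missing.
output = """IMAGE                                     TAG
docker.io/kindest/kindnetd                v20220726-ed811e41
gcr.io/k8s-minikube/storage-provisioner   v5
k8s.gcr.io/etcd                           3.5.3-0
"""

assert not image_present(output, "gcr.io/k8s-minikube/busybox")
assert image_present(output, "gcr.io/k8s-minikube/storage-provisioner")
```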
panic.go:522: *** TestPreload FAILED at 2023-01-14 10:57:53.338544316 +0000 UTC m=+3108.203844445
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-105443 -n test-preload-105443
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-105443 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-105443 logs -n 25: (1.054372883s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-103159 cp multinode-103159-m03:/home/docker/cp-test.txt                       | multinode-103159     | jenkins | v1.28.0 | 14 Jan 23 10:36 UTC | 14 Jan 23 10:36 UTC |
	|         | multinode-103159:/home/docker/cp-test_multinode-103159-m03_multinode-103159.txt         |                      |         |         |                     |                     |
	| ssh     | multinode-103159 ssh -n                                                                 | multinode-103159     | jenkins | v1.28.0 | 14 Jan 23 10:36 UTC | 14 Jan 23 10:36 UTC |
	|         | multinode-103159-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-103159 ssh -n multinode-103159 sudo cat                                       | multinode-103159     | jenkins | v1.28.0 | 14 Jan 23 10:36 UTC | 14 Jan 23 10:36 UTC |
	|         | /home/docker/cp-test_multinode-103159-m03_multinode-103159.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-103159 cp multinode-103159-m03:/home/docker/cp-test.txt                       | multinode-103159     | jenkins | v1.28.0 | 14 Jan 23 10:36 UTC | 14 Jan 23 10:36 UTC |
	|         | multinode-103159-m02:/home/docker/cp-test_multinode-103159-m03_multinode-103159-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-103159 ssh -n                                                                 | multinode-103159     | jenkins | v1.28.0 | 14 Jan 23 10:36 UTC | 14 Jan 23 10:36 UTC |
	|         | multinode-103159-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-103159 ssh -n multinode-103159-m02 sudo cat                                   | multinode-103159     | jenkins | v1.28.0 | 14 Jan 23 10:36 UTC | 14 Jan 23 10:36 UTC |
	|         | /home/docker/cp-test_multinode-103159-m03_multinode-103159-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-103159 node stop m03                                                          | multinode-103159     | jenkins | v1.28.0 | 14 Jan 23 10:36 UTC | 14 Jan 23 10:36 UTC |
	| node    | multinode-103159 node start                                                             | multinode-103159     | jenkins | v1.28.0 | 14 Jan 23 10:36 UTC | 14 Jan 23 10:37 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-103159                                                                | multinode-103159     | jenkins | v1.28.0 | 14 Jan 23 10:37 UTC |                     |
	| stop    | -p multinode-103159                                                                     | multinode-103159     | jenkins | v1.28.0 | 14 Jan 23 10:37 UTC | 14 Jan 23 10:40 UTC |
	| start   | -p multinode-103159                                                                     | multinode-103159     | jenkins | v1.28.0 | 14 Jan 23 10:40 UTC | 14 Jan 23 10:46 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-103159                                                                | multinode-103159     | jenkins | v1.28.0 | 14 Jan 23 10:46 UTC |                     |
	| node    | multinode-103159 node delete                                                            | multinode-103159     | jenkins | v1.28.0 | 14 Jan 23 10:46 UTC | 14 Jan 23 10:46 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-103159 stop                                                                   | multinode-103159     | jenkins | v1.28.0 | 14 Jan 23 10:46 UTC | 14 Jan 23 10:49 UTC |
	| start   | -p multinode-103159                                                                     | multinode-103159     | jenkins | v1.28.0 | 14 Jan 23 10:49 UTC | 14 Jan 23 10:53 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| node    | list -p multinode-103159                                                                | multinode-103159     | jenkins | v1.28.0 | 14 Jan 23 10:53 UTC |                     |
	| start   | -p multinode-103159-m02                                                                 | multinode-103159-m02 | jenkins | v1.28.0 | 14 Jan 23 10:53 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| start   | -p multinode-103159-m03                                                                 | multinode-103159-m03 | jenkins | v1.28.0 | 14 Jan 23 10:53 UTC | 14 Jan 23 10:54 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| node    | add -p multinode-103159                                                                 | multinode-103159     | jenkins | v1.28.0 | 14 Jan 23 10:54 UTC |                     |
	| delete  | -p multinode-103159-m03                                                                 | multinode-103159-m03 | jenkins | v1.28.0 | 14 Jan 23 10:54 UTC | 14 Jan 23 10:54 UTC |
	| delete  | -p multinode-103159                                                                     | multinode-103159     | jenkins | v1.28.0 | 14 Jan 23 10:54 UTC | 14 Jan 23 10:54 UTC |
	| start   | -p test-preload-105443                                                                  | test-preload-105443  | jenkins | v1.28.0 | 14 Jan 23 10:54 UTC | 14 Jan 23 10:56 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| ssh     | -p test-preload-105443                                                                  | test-preload-105443  | jenkins | v1.28.0 | 14 Jan 23 10:56 UTC | 14 Jan 23 10:56 UTC |
	|         | -- sudo crictl pull                                                                     |                      |         |         |                     |                     |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| start   | -p test-preload-105443                                                                  | test-preload-105443  | jenkins | v1.28.0 | 14 Jan 23 10:56 UTC | 14 Jan 23 10:57 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.6                                                            |                      |         |         |                     |                     |
	| ssh     | -p test-preload-105443 -- sudo                                                          | test-preload-105443  | jenkins | v1.28.0 | 14 Jan 23 10:57 UTC | 14 Jan 23 10:57 UTC |
	|         | crictl image ls                                                                         |                      |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 10:56:44
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 10:56:44.841829   27483 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:56:44.841990   27483 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:56:44.842004   27483 out.go:309] Setting ErrFile to fd 2...
	I0114 10:56:44.842011   27483 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:56:44.842150   27483 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-7076/.minikube/bin
	I0114 10:56:44.842714   27483 out.go:303] Setting JSON to false
	I0114 10:56:44.843584   27483 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":5952,"bootTime":1673687853,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:56:44.843643   27483 start.go:135] virtualization: kvm guest
	I0114 10:56:44.845995   27483 out.go:177] * [test-preload-105443] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0114 10:56:44.847505   27483 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 10:56:44.847467   27483 notify.go:220] Checking for updates...
	I0114 10:56:44.849131   27483 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:56:44.850641   27483 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15642-7076/kubeconfig
	I0114 10:56:44.852159   27483 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-7076/.minikube
	I0114 10:56:44.853836   27483 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0114 10:56:44.855568   27483 config.go:180] Loaded profile config "test-preload-105443": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I0114 10:56:44.855925   27483 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0114 10:56:44.855969   27483 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:56:44.871006   27483 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:36411
	I0114 10:56:44.871406   27483 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:56:44.871928   27483 main.go:134] libmachine: Using API Version  1
	I0114 10:56:44.871948   27483 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:56:44.872276   27483 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:56:44.872460   27483 main.go:134] libmachine: (test-preload-105443) Calling .DriverName
	I0114 10:56:44.874302   27483 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	I0114 10:56:44.875951   27483 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:56:44.876417   27483 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0114 10:56:44.876461   27483 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:56:44.891718   27483 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:39941
	I0114 10:56:44.892038   27483 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:56:44.892525   27483 main.go:134] libmachine: Using API Version  1
	I0114 10:56:44.892546   27483 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:56:44.892855   27483 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:56:44.893034   27483 main.go:134] libmachine: (test-preload-105443) Calling .DriverName
	I0114 10:56:44.928948   27483 out.go:177] * Using the kvm2 driver based on existing profile
	I0114 10:56:44.930397   27483 start.go:294] selected driver: kvm2
	I0114 10:56:44.930425   27483 start.go:838] validating driver "kvm2" against &{Name:test-preload-105443 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-105443 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:56:44.930568   27483 start.go:849] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 10:56:44.931465   27483 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 10:56:44.931693   27483 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15642-7076/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0114 10:56:44.947187   27483 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.28.0
	I0114 10:56:44.947505   27483 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0114 10:56:44.947531   27483 cni.go:95] Creating CNI manager for ""
	I0114 10:56:44.947541   27483 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
	I0114 10:56:44.947551   27483 start_flags.go:319] config:
	{Name:test-preload-105443 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-105443 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:56:44.947647   27483 iso.go:125] acquiring lock: {Name:mk2d30b3fe95e944ec3a455ef50a6daa83b559c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 10:56:44.949787   27483 out.go:177] * Starting control plane node test-preload-105443 in cluster test-preload-105443
	I0114 10:56:44.951384   27483 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I0114 10:56:45.066501   27483 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
	I0114 10:56:45.066524   27483 cache.go:57] Caching tarball of preloaded images
	I0114 10:56:45.066747   27483 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I0114 10:56:45.069122   27483 out.go:177] * Downloading Kubernetes v1.24.6 preload ...
	I0114 10:56:45.070627   27483 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I0114 10:56:45.187669   27483 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:0de094b674a9198bc47721c3b23603d5 -> /home/jenkins/minikube-integration/15642-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
	I0114 10:57:02.624024   27483 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I0114 10:57:02.624110   27483 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15642-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I0114 10:57:03.493487   27483 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.6 on containerd
	I0114 10:57:03.493622   27483 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/config.json ...
	I0114 10:57:03.493815   27483 cache.go:193] Successfully downloaded all kic artifacts
	I0114 10:57:03.493843   27483 start.go:364] acquiring machines lock for test-preload-105443: {Name:mk0b2fd58874b04199a2e55d480667572854a1a5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0114 10:57:03.493937   27483 start.go:368] acquired machines lock for "test-preload-105443" in 77.451µs
	I0114 10:57:03.493953   27483 start.go:96] Skipping create...Using existing machine configuration
	I0114 10:57:03.493958   27483 fix.go:55] fixHost starting: 
	I0114 10:57:03.494229   27483 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0114 10:57:03.494268   27483 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:57:03.509103   27483 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:34621
	I0114 10:57:03.509503   27483 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:57:03.509956   27483 main.go:134] libmachine: Using API Version  1
	I0114 10:57:03.509972   27483 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:57:03.510346   27483 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:57:03.510559   27483 main.go:134] libmachine: (test-preload-105443) Calling .DriverName
	I0114 10:57:03.510711   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetState
	I0114 10:57:03.512608   27483 fix.go:103] recreateIfNeeded on test-preload-105443: state=Running err=<nil>
	W0114 10:57:03.512626   27483 fix.go:129] unexpected machine state, will restart: <nil>
	I0114 10:57:03.515899   27483 out.go:177] * Updating the running kvm2 "test-preload-105443" VM ...
	I0114 10:57:03.517259   27483 machine.go:88] provisioning docker machine ...
	I0114 10:57:03.517287   27483 main.go:134] libmachine: (test-preload-105443) Calling .DriverName
	I0114 10:57:03.517498   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetMachineName
	I0114 10:57:03.517653   27483 buildroot.go:166] provisioning hostname "test-preload-105443"
	I0114 10:57:03.517679   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetMachineName
	I0114 10:57:03.517877   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHHostname
	I0114 10:57:03.520528   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:03.520966   27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
	I0114 10:57:03.521003   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:03.521153   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHPort
	I0114 10:57:03.521324   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHKeyPath
	I0114 10:57:03.521464   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHKeyPath
	I0114 10:57:03.521597   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHUsername
	I0114 10:57:03.521755   27483 main.go:134] libmachine: Using SSH client type: native
	I0114 10:57:03.521903   27483 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0114 10:57:03.521917   27483 main.go:134] libmachine: About to run SSH command:
	sudo hostname test-preload-105443 && echo "test-preload-105443" | sudo tee /etc/hostname
	I0114 10:57:03.657055   27483 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-105443
	
	I0114 10:57:03.657083   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHHostname
	I0114 10:57:03.659898   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:03.660230   27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
	I0114 10:57:03.660260   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:03.660430   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHPort
	I0114 10:57:03.660618   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHKeyPath
	I0114 10:57:03.660766   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHKeyPath
	I0114 10:57:03.660889   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHUsername
	I0114 10:57:03.661034   27483 main.go:134] libmachine: Using SSH client type: native
	I0114 10:57:03.661189   27483 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0114 10:57:03.661209   27483 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-105443' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-105443/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-105443' | sudo tee -a /etc/hosts; 
				fi
			fi
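The /etc/hosts rewrite above is idempotent: an existing `127.0.1.1` entry is replaced, otherwise one is appended. A sketch of the same logic, run against a scratch copy so it is safe to execute anywhere (the hostname is taken from this log; GNU grep/sed assumed):

```shell
# Sketch of the idempotent 127.0.1.1 rewrite above, applied to a scratch
# copy of /etc/hosts rather than the real file.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$HOSTS"
NAME=test-preload-105443
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    # An existing 127.0.1.1 entry is rewritten in place...
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    # ...otherwise a fresh entry is appended.
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```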
	I0114 10:57:03.779087   27483 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 10:57:03.779115   27483 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15642-7076/.minikube CaCertPath:/home/jenkins/minikube-integration/15642-7076/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15642-7076/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15642-7076/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15642-7076/.minikube}
	I0114 10:57:03.779137   27483 buildroot.go:174] setting up certificates
	I0114 10:57:03.779146   27483 provision.go:83] configureAuth start
	I0114 10:57:03.779160   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetMachineName
	I0114 10:57:03.779387   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetIP
	I0114 10:57:03.781939   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:03.782288   27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
	I0114 10:57:03.782316   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:03.782430   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHHostname
	I0114 10:57:03.784455   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:03.784750   27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
	I0114 10:57:03.784786   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:03.784881   27483 provision.go:138] copyHostCerts
	I0114 10:57:03.784922   27483 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-7076/.minikube/ca.pem, removing ...
	I0114 10:57:03.784932   27483 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-7076/.minikube/ca.pem
	I0114 10:57:03.785006   27483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-7076/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15642-7076/.minikube/ca.pem (1078 bytes)
	I0114 10:57:03.785109   27483 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-7076/.minikube/cert.pem, removing ...
	I0114 10:57:03.785120   27483 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-7076/.minikube/cert.pem
	I0114 10:57:03.785147   27483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-7076/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15642-7076/.minikube/cert.pem (1123 bytes)
	I0114 10:57:03.785195   27483 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-7076/.minikube/key.pem, removing ...
	I0114 10:57:03.785202   27483 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-7076/.minikube/key.pem
	I0114 10:57:03.785224   27483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-7076/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15642-7076/.minikube/key.pem (1679 bytes)
	I0114 10:57:03.785270   27483 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15642-7076/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15642-7076/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15642-7076/.minikube/certs/ca-key.pem org=jenkins.test-preload-105443 san=[192.168.39.172 192.168.39.172 localhost 127.0.0.1 minikube test-preload-105443]
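The provision step signs a server certificate carrying the SAN list shown above. A rough equivalent with the openssl CLI (an assumption, not minikube's Go code: this one is self-signed rather than signed by minikube's CA, all paths are scratch, and `-addext` needs OpenSSL 1.1.1+):

```shell
# Sketch: self-signed server certificate with the same SANs as the
# provision step above; minikube itself signs against its CA instead.
KEYDIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$KEYDIR/server-key.pem" -out "$KEYDIR/server.pem" \
  -subj "/O=jenkins.test-preload-105443/CN=minikube" \
  -addext "subjectAltName=IP:192.168.39.172,DNS:localhost,IP:127.0.0.1,DNS:minikube,DNS:test-preload-105443" \
  2>/dev/null
# Print the SAN extension back out of the generated certificate.
SANS=$(openssl x509 -in "$KEYDIR/server.pem" -noout -ext subjectAltName)
echo "$SANS"
```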
	I0114 10:57:03.904735   27483 provision.go:172] copyRemoteCerts
	I0114 10:57:03.904794   27483 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 10:57:03.904814   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHHostname
	I0114 10:57:03.907354   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:03.907664   27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
	I0114 10:57:03.907706   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:03.907872   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHPort
	I0114 10:57:03.908036   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHKeyPath
	I0114 10:57:03.908221   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHUsername
	I0114 10:57:03.908378   27483 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/machines/test-preload-105443/id_rsa Username:docker}
	I0114 10:57:03.996081   27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0114 10:57:04.020384   27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0114 10:57:04.042430   27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0114 10:57:04.064424   27483 provision.go:86] duration metric: configureAuth took 285.2617ms
	I0114 10:57:04.064452   27483 buildroot.go:189] setting minikube options for container-runtime
	I0114 10:57:04.064606   27483 config.go:180] Loaded profile config "test-preload-105443": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
	I0114 10:57:04.064617   27483 machine.go:91] provisioned docker machine in 547.340706ms
	I0114 10:57:04.064622   27483 start.go:300] post-start starting for "test-preload-105443" (driver="kvm2")
	I0114 10:57:04.064628   27483 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 10:57:04.064653   27483 main.go:134] libmachine: (test-preload-105443) Calling .DriverName
	I0114 10:57:04.064923   27483 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 10:57:04.064952   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHHostname
	I0114 10:57:04.067356   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:04.067669   27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
	I0114 10:57:04.067705   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:04.067874   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHPort
	I0114 10:57:04.068068   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHKeyPath
	I0114 10:57:04.068195   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHUsername
	I0114 10:57:04.068353   27483 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/machines/test-preload-105443/id_rsa Username:docker}
	I0114 10:57:04.155889   27483 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 10:57:04.160000   27483 info.go:137] Remote host: Buildroot 2021.02.12
	I0114 10:57:04.160026   27483 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-7076/.minikube/addons for local assets ...
	I0114 10:57:04.160107   27483 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-7076/.minikube/files for local assets ...
	I0114 10:57:04.160194   27483 filesync.go:149] local asset: /home/jenkins/minikube-integration/15642-7076/.minikube/files/etc/ssl/certs/139212.pem -> 139212.pem in /etc/ssl/certs
	I0114 10:57:04.160304   27483 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 10:57:04.169302   27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/files/etc/ssl/certs/139212.pem --> /etc/ssl/certs/139212.pem (1708 bytes)
	I0114 10:57:04.191837   27483 start.go:303] post-start completed in 127.20128ms
	I0114 10:57:04.191871   27483 fix.go:57] fixHost completed within 697.911934ms
	I0114 10:57:04.191896   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHHostname
	I0114 10:57:04.194381   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:04.194670   27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
	I0114 10:57:04.194703   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:04.194903   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHPort
	I0114 10:57:04.195079   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHKeyPath
	I0114 10:57:04.195212   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHKeyPath
	I0114 10:57:04.195378   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHUsername
	I0114 10:57:04.195505   27483 main.go:134] libmachine: Using SSH client type: native
	I0114 10:57:04.195622   27483 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 192.168.39.172 22 <nil> <nil>}
	I0114 10:57:04.195632   27483 main.go:134] libmachine: About to run SSH command:
	date +%s.%N
	I0114 10:57:04.314929   27483 main.go:134] libmachine: SSH cmd err, output: <nil>: 1673693824.311811677
	
	I0114 10:57:04.314953   27483 fix.go:207] guest clock: 1673693824.311811677
	I0114 10:57:04.314960   27483 fix.go:220] Guest: 2023-01-14 10:57:04.311811677 +0000 UTC Remote: 2023-01-14 10:57:04.191876949 +0000 UTC m=+19.411693954 (delta=119.934728ms)
	I0114 10:57:04.314981   27483 fix.go:191] guest clock delta is within tolerance: 119.934728ms
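The guest-clock check above compares a `date +%s.%N` reading from the VM against the host clock and accepts a small skew. A sketch of the delta computation (not minikube's code; both readings are taken locally here so it runs anywhere, and the 1-second tolerance is a placeholder assumption, not minikube's actual threshold):

```shell
# Sketch: absolute skew between two %s.%N timestamps, as in the guest
# clock check above.  GNU date assumed for nanosecond precision.
T_HOST=$(date +%s.%N)
T_GUEST=$(date +%s.%N)   # on a real check this reading would come from the VM over SSH
RESULT=$(awk -v a="$T_HOST" -v b="$T_GUEST" 'BEGIN {
  d = b - a; if (d < 0) d = -d               # absolute delta in seconds
  printf "delta=%.9fs within_tolerance=%s", d, (d < 1.0 ? "yes" : "no")
}')
echo "$RESULT"
```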
	I0114 10:57:04.314987   27483 start.go:83] releasing machines lock for "test-preload-105443", held for 821.037649ms
	I0114 10:57:04.315032   27483 main.go:134] libmachine: (test-preload-105443) Calling .DriverName
	I0114 10:57:04.315315   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetIP
	I0114 10:57:04.317727   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:04.318095   27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
	I0114 10:57:04.318138   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:04.318274   27483 main.go:134] libmachine: (test-preload-105443) Calling .DriverName
	I0114 10:57:04.318776   27483 main.go:134] libmachine: (test-preload-105443) Calling .DriverName
	I0114 10:57:04.318952   27483 main.go:134] libmachine: (test-preload-105443) Calling .DriverName
	I0114 10:57:04.319018   27483 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0114 10:57:04.319066   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHHostname
	I0114 10:57:04.319167   27483 ssh_runner.go:195] Run: cat /version.json
	I0114 10:57:04.319188   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHHostname
	I0114 10:57:04.321686   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:04.321717   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:04.321990   27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
	I0114 10:57:04.322028   27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
	I0114 10:57:04.322048   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:04.322101   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:04.322310   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHPort
	I0114 10:57:04.322402   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHPort
	I0114 10:57:04.322502   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHKeyPath
	I0114 10:57:04.322555   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHKeyPath
	I0114 10:57:04.322615   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHUsername
	I0114 10:57:04.322706   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHUsername
	I0114 10:57:04.322727   27483 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/machines/test-preload-105443/id_rsa Username:docker}
	I0114 10:57:04.322814   27483 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/machines/test-preload-105443/id_rsa Username:docker}
	I0114 10:57:04.417331   27483 ssh_runner.go:195] Run: systemctl --version
	I0114 10:57:04.424326   27483 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I0114 10:57:04.424440   27483 ssh_runner.go:195] Run: sudo crictl images --output json
	I0114 10:57:04.454669   27483 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.6". assuming images are not preloaded.
	I0114 10:57:04.454724   27483 ssh_runner.go:195] Run: which lz4
	I0114 10:57:04.459037   27483 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0114 10:57:04.463259   27483 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0114 10:57:04.463289   27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458739102 bytes)
	I0114 10:57:06.662983   27483 containerd.go:496] Took 2.203974 seconds to copy over tarball
	I0114 10:57:06.663050   27483 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0114 10:57:10.021006   27483 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.35792013s)
	I0114 10:57:10.021040   27483 containerd.go:503] Took 3.358030 seconds to extract the tarball
	I0114 10:57:10.021054   27483 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0114 10:57:10.063775   27483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 10:57:10.198644   27483 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0114 10:57:10.235539   27483 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0114 10:57:10.253591   27483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 10:57:10.266454   27483 docker.go:189] disabling docker service ...
	I0114 10:57:10.266504   27483 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0114 10:57:10.282083   27483 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0114 10:57:10.297881   27483 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0114 10:57:10.440617   27483 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0114 10:57:10.602422   27483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0114 10:57:10.618291   27483 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 10:57:10.636619   27483 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I0114 10:57:10.648420   27483 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0114 10:57:10.659142   27483 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0114 10:57:10.669259   27483 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.d"|' -i /etc/containerd/config.toml"
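The four sed invocations above rewrite containerd's config.toml in place. The same rewrites, applied to a scratch copy so the sketch is runnable without a real containerd install (sample input values are invented; GNU `sed -i` assumed):

```shell
# Sketch: the sandbox_image / SystemdCgroup / conf_dir rewrites above,
# against a scratch config.toml instead of /etc/containerd/config.toml.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
sandbox_image = "k8s.gcr.io/pause:3.6"
SystemdCgroup = true
conf_dir = "/etc/cni/net.mk"
EOF
sed -i \
  -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' \
  -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' \
  -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.d"|' "$CFG"
cat "$CFG"
```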
	I0114 10:57:10.679887   27483 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0114 10:57:10.689402   27483 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0114 10:57:10.699119   27483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 10:57:10.833459   27483 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0114 10:57:11.085684   27483 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0114 10:57:11.085757   27483 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0114 10:57:11.109498   27483 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0114 10:57:12.215242   27483 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0114 10:57:12.220461   27483 retry.go:31] will retry after 2.160763633s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0114 10:57:14.382209   27483 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
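The two failed `stat` calls above are the retry loop waiting for containerd to recreate its socket after the restart. A minimal sketch of such a bounded poll (not minikube's retry.go, which uses randomized backoff; a background `touch` on a temp path stands in for containerd creating the socket):

```shell
# Sketch: poll a path with a bounded number of attempts, mirroring the
# "Will wait 60s for socket path" loop above.
TARGET=$(mktemp -u)            # stand-in for /run/containerd/containerd.sock
( sleep 1; touch "$TARGET" ) & # simulate containerd creating its socket
i=0
until stat "$TARGET" >/dev/null 2>&1; do
  i=$((i + 1))
  if [ "$i" -gt 100 ]; then
    echo "timed out waiting for $TARGET" >&2
    exit 1
  fi
  sleep 0.2                    # fixed interval; minikube backs off instead
done
WAIT_RESULT="ready after $i polls: $TARGET"
echo "$WAIT_RESULT"
```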
	I0114 10:57:14.387459   27483 start.go:472] Will wait 60s for crictl version
	I0114 10:57:14.387510   27483 ssh_runner.go:195] Run: which crictl
	I0114 10:57:14.391151   27483 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0114 10:57:14.420438   27483 start.go:488] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.6.9
	RuntimeApiVersion:  v1alpha2
	I0114 10:57:14.420496   27483 ssh_runner.go:195] Run: containerd --version
	I0114 10:57:14.452838   27483 ssh_runner.go:195] Run: containerd --version
	I0114 10:57:14.483693   27483 out.go:177] * Preparing Kubernetes v1.24.6 on containerd 1.6.9 ...
	I0114 10:57:14.485043   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetIP
	I0114 10:57:14.487862   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:14.488196   27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
	I0114 10:57:14.488228   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:14.488412   27483 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0114 10:57:14.492727   27483 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I0114 10:57:14.492793   27483 ssh_runner.go:195] Run: sudo crictl images --output json
	I0114 10:57:14.521168   27483 containerd.go:553] all images are preloaded for containerd runtime.
	I0114 10:57:14.521193   27483 containerd.go:467] Images already preloaded, skipping extraction
	I0114 10:57:14.521240   27483 ssh_runner.go:195] Run: sudo crictl images --output json
	I0114 10:57:14.550424   27483 containerd.go:553] all images are preloaded for containerd runtime.
	I0114 10:57:14.550449   27483 cache_images.go:84] Images are preloaded, skipping loading
	I0114 10:57:14.550501   27483 ssh_runner.go:195] Run: sudo crictl info
	I0114 10:57:14.604746   27483 cni.go:95] Creating CNI manager for ""
	I0114 10:57:14.604769   27483 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
	I0114 10:57:14.604779   27483 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0114 10:57:14.604798   27483 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.172 APIServerPort:8443 KubernetesVersion:v1.24.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-105443 NodeName:test-preload-105443 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.172"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.172 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[]}
	I0114 10:57:14.604946   27483 kubeadm.go:163] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.172
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "test-preload-105443"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.172
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.172"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0114 10:57:14.605047   27483 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-105443 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.172 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.6 ClusterName:test-preload-105443 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0114 10:57:14.605108   27483 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.6
	I0114 10:57:14.617185   27483 binaries.go:44] Found k8s binaries, skipping transfer
	I0114 10:57:14.617251   27483 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0114 10:57:14.628477   27483 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (514 bytes)
	I0114 10:57:14.650332   27483 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0114 10:57:14.676514   27483 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2054 bytes)
	I0114 10:57:14.705028   27483 ssh_runner.go:195] Run: grep 192.168.39.172	control-plane.minikube.internal$ /etc/hosts
	I0114 10:57:14.717550   27483 certs.go:54] Setting up /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443 for IP: 192.168.39.172
	I0114 10:57:14.717670   27483 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15642-7076/.minikube/ca.key
	I0114 10:57:14.717722   27483 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15642-7076/.minikube/proxy-client-ca.key
	I0114 10:57:14.717812   27483 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/client.key
	I0114 10:57:14.717902   27483 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/apiserver.key.ee96354a
	I0114 10:57:14.717961   27483 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/proxy-client.key
	I0114 10:57:14.718097   27483 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-7076/.minikube/certs/home/jenkins/minikube-integration/15642-7076/.minikube/certs/13921.pem (1338 bytes)
	W0114 10:57:14.718130   27483 certs.go:384] ignoring /home/jenkins/minikube-integration/15642-7076/.minikube/certs/home/jenkins/minikube-integration/15642-7076/.minikube/certs/13921_empty.pem, impossibly tiny 0 bytes
	I0114 10:57:14.718143   27483 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-7076/.minikube/certs/home/jenkins/minikube-integration/15642-7076/.minikube/certs/ca-key.pem (1675 bytes)
	I0114 10:57:14.718177   27483 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-7076/.minikube/certs/home/jenkins/minikube-integration/15642-7076/.minikube/certs/ca.pem (1078 bytes)
	I0114 10:57:14.718210   27483 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-7076/.minikube/certs/home/jenkins/minikube-integration/15642-7076/.minikube/certs/cert.pem (1123 bytes)
	I0114 10:57:14.718236   27483 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-7076/.minikube/certs/home/jenkins/minikube-integration/15642-7076/.minikube/certs/key.pem (1679 bytes)
	I0114 10:57:14.718287   27483 certs.go:388] found cert: /home/jenkins/minikube-integration/15642-7076/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15642-7076/.minikube/files/etc/ssl/certs/139212.pem (1708 bytes)
	I0114 10:57:14.718980   27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0114 10:57:14.772325   27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0114 10:57:14.805451   27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0114 10:57:14.836856   27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0114 10:57:14.870023   27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0114 10:57:14.923667   27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0114 10:57:14.954579   27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0114 10:57:14.981542   27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0114 10:57:15.019906   27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/files/etc/ssl/certs/139212.pem --> /usr/share/ca-certificates/139212.pem (1708 bytes)
	I0114 10:57:15.045803   27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0114 10:57:15.082706   27483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/certs/13921.pem --> /usr/share/ca-certificates/13921.pem (1338 bytes)
	I0114 10:57:15.128950   27483 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0114 10:57:15.169006   27483 ssh_runner.go:195] Run: openssl version
	I0114 10:57:15.175581   27483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/139212.pem && ln -fs /usr/share/ca-certificates/139212.pem /etc/ssl/certs/139212.pem"
	I0114 10:57:15.189393   27483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/139212.pem
	I0114 10:57:15.208365   27483 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jan 14 10:21 /usr/share/ca-certificates/139212.pem
	I0114 10:57:15.208434   27483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/139212.pem
	I0114 10:57:15.216455   27483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/139212.pem /etc/ssl/certs/3ec20f2e.0"
	I0114 10:57:15.227070   27483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0114 10:57:15.258830   27483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:57:15.270595   27483 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jan 14 10:07 /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:57:15.270650   27483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0114 10:57:15.279993   27483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0114 10:57:15.289388   27483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13921.pem && ln -fs /usr/share/ca-certificates/13921.pem /etc/ssl/certs/13921.pem"
	I0114 10:57:15.300388   27483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13921.pem
	I0114 10:57:15.305102   27483 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jan 14 10:21 /usr/share/ca-certificates/13921.pem
	I0114 10:57:15.305147   27483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13921.pem
	I0114 10:57:15.319407   27483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/13921.pem /etc/ssl/certs/51391683.0"
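The `openssl x509 -hash` / `ln -fs` pairs above follow the OpenSSL trust-store convention: each CA certificate is linked into `/etc/ssl/certs` both under its own name and under `<subject hash>.0`, the name the TLS stack actually looks up. A minimal sketch of how those two shell commands are assembled (the function name is illustrative, not minikube's):

```python
def install_ca_cmds(pem: str, subject_hash: str) -> list[str]:
    """Build the two shell commands the log shows for each CA cert:
    link the PEM into /etc/ssl/certs under its own name, then under
    the OpenSSL subject-hash name (<8 hex digits>.0)."""
    name = pem.rsplit("/", 1)[-1]
    return [
        f"test -s {pem} && ln -fs {pem} /etc/ssl/certs/{name}",
        f"test -L /etc/ssl/certs/{subject_hash}.0 || "
        f"ln -fs /etc/ssl/certs/{name} /etc/ssl/certs/{subject_hash}.0",
    ]

# Hash value taken from the log line for minikubeCA.pem above.
cmds = install_ca_cmds("/usr/share/ca-certificates/minikubeCA.pem", "b5213941")
```

The subject hash itself comes from `openssl x509 -hash -noout -in <pem>`, exactly as the log runs it before each symlink.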
	I0114 10:57:15.344944   27483 kubeadm.go:396] StartCluster: {Name:test-preload-105443 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-105443 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:57:15.345031   27483 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0114 10:57:15.345068   27483 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0114 10:57:15.396831   27483 cri.go:87] found id: "93d761323665a60c6f60a5e637528dc6a4dd02a6848672e1e233c83067a23f29"
	I0114 10:57:15.396859   27483 cri.go:87] found id: ""
	I0114 10:57:15.396895   27483 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0114 10:57:15.447928   27483 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"114ee96bae199590b8bb74dd0eec9ca90439a91f7d9930b299b1e8fb023f9bb0","pid":2921,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/114ee96bae199590b8bb74dd0eec9ca90439a91f7d9930b299b1e8fb023f9bb0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/114ee96bae199590b8bb74dd0eec9ca90439a91f7d9930b299b1e8fb023f9bb0/rootfs","created":"2023-01-14T10:57:14.735337535Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"114ee96bae199590b8bb74dd0eec9ca90439a91f7d9930b299b1e8fb023f9bb0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-105443_72c33a3ad2d2e5f9b0a0ed2b8f209e20","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-105443","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1dcdd5eb216dc2158ef601dfa8fa972eb6150125bbcb1efbc5bc2b67c043a88a","pid":2779,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1dcdd5eb216dc2158ef601dfa8fa972eb6150125bbcb1efbc5bc2b67c043a88a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1dcdd5eb216dc2158ef601dfa8fa972eb6150125bbcb1efbc5bc2b67c043a88a/rootfs","created":"2023-01-14T10:57:13.723267075Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"1dcdd5eb216dc2158ef601dfa8fa972eb6150125bbcb1efbc5bc2b67c043a88a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-test-preload-105443_84d1f443092d7d6e8972fbfd258f9adb","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-test-preload-105443","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"70967bd17232a49ba21cb860577c5572b47e7d3adda4236f1afbda88224c66be","pid":2786,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/70967bd17232a49ba21cb860577c5572b47e7d3adda4236f1afbda88224c66be","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/70967bd17232a49ba21cb860577c5572b47e7d3adda4236f1afbda88224c66be/rootfs","created":"2023-01-14T10:57:14.064706786Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"70967bd17232a49ba21cb860577c5572b47e7d3adda4236f1afbda88224c66be","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-llwpq_91739d92-c705-413a-9c93-bd3ff50a4bde","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-llwpq","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"93d761323665a60c6f60a5e637528dc6a4dd02a6848672e1e233c83067a23f29","pid":3010,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/93d761323665a60c6f60a5e637528dc6a4dd02a6848672e1e233c83067a23f29","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/93d761323665a60c6f60a5e637528dc6a4dd02a6848672e1e233c83067a23f29/rootfs","created":"2023-01-14T10:57:15.435196751Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri.sandbox-id":"114ee96bae199590b8bb74dd0eec9ca90439a91f7d9930b299b1e8fb023f9bb0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-105443","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bb21f84ffa5230e20846281dc9b549e82a92b965967e6352faf140f6681e9289","pid":2765,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb21f84ffa5230e20846281dc9b549e82a92b965967e6352faf140f6681e9289","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb21f84ffa5230e20846281dc9b549e82a92b965967e6352faf140f6681e9289/rootfs","created":"2023-01-14T10:57:13.717648473Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"bb21f84ffa5230e20846281dc9b549e82a92b965967e6352faf140f6681e9289","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-test-preload-105443_8957cb515cac201172c0da126ed92840","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-test-preload-105443","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fcfd1315e71098a028d44b634efe12c5a3081e29b5c6ce481697829c60bec6d3","pid":2906,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fcfd1315e71098a028d44b634efe12c5a3081e29b5c6ce481697829c60bec6d3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fcfd1315e71098a028d44b634efe12c5a3081e29b5c6ce481697829c60bec6d3/rootfs","created":"2023-01-14T10:57:14.698581531Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"fcfd1315e71098a028d44b634efe12c5a3081e29b5c6ce481697829c60bec6d3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-105443_bf9ef742a4e80f823bde6bfa4ea6ea87","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-test-preload-105443","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"}]
	I0114 10:57:15.448081   27483 cri.go:124] list returned 6 containers
	I0114 10:57:15.448095   27483 cri.go:127] container: {ID:114ee96bae199590b8bb74dd0eec9ca90439a91f7d9930b299b1e8fb023f9bb0 Status:running}
	I0114 10:57:15.448112   27483 cri.go:129] skipping 114ee96bae199590b8bb74dd0eec9ca90439a91f7d9930b299b1e8fb023f9bb0 - not in ps
	I0114 10:57:15.448120   27483 cri.go:127] container: {ID:1dcdd5eb216dc2158ef601dfa8fa972eb6150125bbcb1efbc5bc2b67c043a88a Status:running}
	I0114 10:57:15.448130   27483 cri.go:129] skipping 1dcdd5eb216dc2158ef601dfa8fa972eb6150125bbcb1efbc5bc2b67c043a88a - not in ps
	I0114 10:57:15.448140   27483 cri.go:127] container: {ID:70967bd17232a49ba21cb860577c5572b47e7d3adda4236f1afbda88224c66be Status:running}
	I0114 10:57:15.448150   27483 cri.go:129] skipping 70967bd17232a49ba21cb860577c5572b47e7d3adda4236f1afbda88224c66be - not in ps
	I0114 10:57:15.448160   27483 cri.go:127] container: {ID:93d761323665a60c6f60a5e637528dc6a4dd02a6848672e1e233c83067a23f29 Status:created}
	I0114 10:57:15.448169   27483 cri.go:133] skipping {93d761323665a60c6f60a5e637528dc6a4dd02a6848672e1e233c83067a23f29 created}: state = "created", want "paused"
	I0114 10:57:15.448184   27483 cri.go:127] container: {ID:bb21f84ffa5230e20846281dc9b549e82a92b965967e6352faf140f6681e9289 Status:running}
	I0114 10:57:15.448193   27483 cri.go:129] skipping bb21f84ffa5230e20846281dc9b549e82a92b965967e6352faf140f6681e9289 - not in ps
	I0114 10:57:15.448200   27483 cri.go:127] container: {ID:fcfd1315e71098a028d44b634efe12c5a3081e29b5c6ce481697829c60bec6d3 Status:running}
	I0114 10:57:15.448210   27483 cri.go:129] skipping fcfd1315e71098a028d44b634efe12c5a3081e29b5c6ce481697829c60bec6d3 - not in ps
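The skip decisions above follow a simple filter: a container from `runc list` is acted on only if `crictl ps` also reported it, and only if its runc state matches the desired one (`paused` here, which is why the `created` etcd container and every `running` sandbox are skipped). A rough reimplementation of that filter, assuming only the `id`/`status` fields matter (this is a sketch, not minikube's cri.go):

```python
import json

def select_containers(runc_json: str, want_state: str, found_ids: set) -> list:
    """Keep a container only if crictl reported it (found_ids) and its
    runc status matches the desired state; mirrors the 'skipping ...'
    log lines above."""
    selected = []
    for c in json.loads(runc_json):
        if c["id"] not in found_ids:
            continue  # "skipping <id> - not in ps"
        if c["status"] != want_state:
            continue  # 'state = "<status>", want "<want_state>"'
        selected.append(c["id"])
    return selected

# Abbreviated IDs stand in for the 64-char container IDs in the log.
sample = json.dumps([
    {"id": "114ee96b", "status": "running"},
    {"id": "93d76132", "status": "created"},
])
print(select_containers(sample, "paused", {"93d76132"}))  # → []
```

With `want_state="paused"` nothing survives the filter, matching the log: every entry is skipped and no container is paused.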
	I0114 10:57:15.448255   27483 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0114 10:57:15.462725   27483 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I0114 10:57:15.462762   27483 kubeadm.go:627] restartCluster start
	I0114 10:57:15.462811   27483 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0114 10:57:15.474718   27483 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:57:15.475372   27483 kubeconfig.go:92] found "test-preload-105443" server: "https://192.168.39.172:8443"
	I0114 10:57:15.476281   27483 kapi.go:59] client config for test-preload-105443: &rest.Config{Host:"https://192.168.39.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/client.key", CAFile:"/home/jenkins/minikube-integration/15642-7076/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 10:57:15.476988   27483 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0114 10:57:15.486392   27483 kubeadm.go:594] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -38,7 +38,7 @@
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	       proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.24.4
	+kubernetesVersion: v1.24.6
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
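`needs reconfigure` is decided purely by running `diff -u` between the deployed `kubeadm.yaml` and the freshly generated `.new` file; any non-empty diff (here the `kubernetesVersion` bump from v1.24.4 to v1.24.6) triggers a cluster restart rather than a fresh `kubeadm init`. The same check can be sketched with `difflib`, under the assumption that only the textual comparison matters:

```python
import difflib

def needs_reconfigure(deployed: str, generated: str) -> bool:
    """True if the configs differ, mirroring 'sudo diff -u old new'
    exiting non-zero."""
    return deployed != generated

def config_diff(deployed: str, generated: str) -> str:
    """Unified diff in the same shape as the log output above."""
    return "".join(difflib.unified_diff(
        deployed.splitlines(keepends=True),
        generated.splitlines(keepends=True),
        fromfile="/var/tmp/minikube/kubeadm.yaml",
        tofile="/var/tmp/minikube/kubeadm.yaml.new"))

old = "kubernetesVersion: v1.24.4\n"
new = "kubernetesVersion: v1.24.6\n"
```

Feeding the single changed line through `config_diff` reproduces the `-kubernetesVersion: v1.24.4` / `+kubernetesVersion: v1.24.6` hunk shown in the log.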
	I0114 10:57:15.486417   27483 kubeadm.go:1114] stopping kube-system containers ...
	I0114 10:57:15.486430   27483 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0114 10:57:15.486486   27483 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0114 10:57:15.524810   27483 cri.go:87] found id: "93d761323665a60c6f60a5e637528dc6a4dd02a6848672e1e233c83067a23f29"
	I0114 10:57:15.524847   27483 cri.go:87] found id: ""
	I0114 10:57:15.524854   27483 cri.go:232] Stopping containers: [93d761323665a60c6f60a5e637528dc6a4dd02a6848672e1e233c83067a23f29]
	I0114 10:57:15.524897   27483 ssh_runner.go:195] Run: which crictl
	I0114 10:57:15.529370   27483 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 93d761323665a60c6f60a5e637528dc6a4dd02a6848672e1e233c83067a23f29
	I0114 10:57:15.569312   27483 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0114 10:57:15.615525   27483 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0114 10:57:15.627591   27483 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan 14 10:55 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Jan 14 10:55 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2015 Jan 14 10:55 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Jan 14 10:55 /etc/kubernetes/scheduler.conf
	
	I0114 10:57:15.627641   27483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0114 10:57:15.636337   27483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0114 10:57:15.644489   27483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0114 10:57:15.652442   27483 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:57:15.652495   27483 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0114 10:57:15.660754   27483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0114 10:57:15.668520   27483 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0114 10:57:15.668569   27483 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
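Each `/etc/kubernetes/*.conf` file above survives only if it already points at `https://control-plane.minikube.internal:8443`; a `grep` exiting with status 1 means the file targets some other endpoint, so it is deleted and left for `kubeadm init phase kubeconfig` to regenerate. The decision reduces to a substring test (sketch, with an illustrative function name):

```python
ENDPOINT = "https://control-plane.minikube.internal:8443"

def keep_kubeconfig(conf_text: str, endpoint: str = ENDPOINT) -> bool:
    """Mirror 'sudo grep <endpoint> <conf>': keep the file only if it
    already targets the expected control-plane address; otherwise the
    caller removes it ('will remove' in the log)."""
    return endpoint in conf_text

# A config generated for the node IP fails the check and is removed.
stale = "server: https://192.168.39.172:8443\n"
```

In this run, `controller-manager.conf` and `scheduler.conf` fail the check and are removed, while `admin.conf` and `kubelet.conf` pass.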
	I0114 10:57:15.676769   27483 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0114 10:57:15.685489   27483 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0114 10:57:15.685513   27483 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:57:15.821101   27483 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:57:16.561318   27483 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:57:16.912818   27483 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:57:16.985058   27483 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
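For a restart, minikube does not rerun a full `kubeadm init`; it replays five individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the updated config, each with PATH pointed at the versioned binaries. A helper reconstructing those inner command lines (the surrounding `/bin/bash -c "..."` wrapper is omitted; the helper itself is illustrative, not minikube source):

```python
def kubeadm_phase_cmd(phase: str, version: str = "v1.24.6",
                      cfg: str = "/var/tmp/minikube/kubeadm.yaml") -> str:
    """Command line for one 'kubeadm init phase' step, as in the log."""
    return (f'sudo env PATH="/var/lib/minikube/binaries/{version}:$PATH" '
            f"kubeadm init phase {phase} --config {cfg}")

# The five phases replayed above, in order:
phases = ["certs all", "kubeconfig all", "kubelet-start",
          "control-plane all", "etcd local"]
```

Running the phases in this order matters: kubeconfigs are signed by the certs phase, and the kubelet must be up before the static-pod manifests written by the control-plane and etcd phases are picked up.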
	I0114 10:57:17.056030   27483 api_server.go:51] waiting for apiserver process to appear ...
	I0114 10:57:17.056107   27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:57:17.572676   27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:57:18.072130   27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:57:18.572844   27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:57:19.072025   27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:57:19.572115   27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:57:20.072473   27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:57:20.572690   27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:57:21.072787   27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:57:21.572651   27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:57:22.072387   27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:57:22.572167   27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:57:23.071994   27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:57:23.572128   27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:57:24.072921   27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:57:24.572938   27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:57:24.598617   27483 api_server.go:71] duration metric: took 7.542591348s to wait for apiserver process to appear ...
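The `pgrep` lines show the wait loop's cadence: one probe roughly every 500 ms until a `kube-apiserver` process appears, with the total elapsed time (7.54 s here) reported as a duration metric. A generic version of that poll loop, assuming only a boolean probe function:

```python
import time

def wait_for(probe, interval: float = 0.5, timeout: float = 60.0) -> float:
    """Poll probe() every `interval` seconds until it returns True;
    return the elapsed time, or raise TimeoutError."""
    start = time.monotonic()
    while True:
        if probe():
            return time.monotonic() - start
        if time.monotonic() - start > timeout:
            raise TimeoutError("process never appeared")
        time.sleep(interval)

# Simulated 'sudo pgrep -xnf kube-apiserver...' that succeeds on the
# third attempt (a stand-in for the real SSH probe):
calls = {"n": 0}
def fake_pgrep() -> bool:
    calls["n"] += 1
    return calls["n"] >= 3

elapsed = wait_for(fake_pgrep, interval=0.01)
```

The real probe is `sudo pgrep -xnf kube-apiserver.*minikube.*` over SSH; only the retry-until-success shape is shown here.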
	I0114 10:57:24.598638   27483 api_server.go:87] waiting for apiserver healthz status ...
	I0114 10:57:24.598647   27483 api_server.go:252] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0114 10:57:24.599178   27483 api_server.go:268] stopped: https://192.168.39.172:8443/healthz: Get "https://192.168.39.172:8443/healthz": dial tcp 192.168.39.172:8443: connect: connection refused
	I0114 10:57:25.100112   27483 api_server.go:252] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0114 10:57:30.100745   27483 api_server.go:268] stopped: https://192.168.39.172:8443/healthz: Get "https://192.168.39.172:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0114 10:57:30.599398   27483 api_server.go:252] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0114 10:57:34.404841   27483 api_server.go:278] https://192.168.39.172:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0114 10:57:34.404872   27483 api_server.go:102] status: https://192.168.39.172:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0114 10:57:34.600258   27483 api_server.go:252] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0114 10:57:34.614272   27483 api_server.go:278] https://192.168.39.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0114 10:57:34.614309   27483 api_server.go:102] status: https://192.168.39.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
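Both the earlier 403 (the probe authenticates as `system:anonymous` until RBAC bootstrap finishes) and these 500 bodies are treated as "not ready yet"; readiness is reached only when the verbose `/healthz` body contains no `[-]` lines. Extracting the failing checks from such a body is straightforward (sketch; the function name is illustrative):

```python
def failed_checks(healthz_body: str) -> list:
    """Return the names of checks marked '[-]' in a verbose /healthz
    body, stripping the ' failed: reason withheld' suffix."""
    names = []
    for raw in healthz_body.splitlines():
        line = raw.strip()
        if line.startswith("[-]"):
            names.append(line[3:].split(" failed")[0])
    return names

# Reduced version of the 500 body in the log above:
body = """[+]ping ok
[+]etcd ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
healthz check failed"""
```

On the log's body this yields the two post-start hooks still pending; a later probe shows `scheduling/bootstrap-system-priority-classes` flip to `[+]` while `rbac/bootstrap-roles` is still settling.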
	I0114 10:57:35.100171   27483 api_server.go:252] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0114 10:57:35.116101   27483 api_server.go:278] https://192.168.39.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0114 10:57:35.116137   27483 api_server.go:102] status: https://192.168.39.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0114 10:57:35.600093   27483 api_server.go:252] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0114 10:57:35.610768   27483 api_server.go:278] https://192.168.39.172:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0114 10:57:35.610795   27483 api_server.go:102] status: https://192.168.39.172:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0114 10:57:36.099343   27483 api_server.go:252] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0114 10:57:36.106733   27483 api_server.go:278] https://192.168.39.172:8443/healthz returned 200:
	ok
	I0114 10:57:36.113306   27483 api_server.go:140] control plane version: v1.24.6
	I0114 10:57:36.113325   27483 api_server.go:130] duration metric: took 11.514682329s to wait for apiserver health ...
	I0114 10:57:36.113332   27483 cni.go:95] Creating CNI manager for ""
	I0114 10:57:36.113338   27483 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
	I0114 10:57:36.115499   27483 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0114 10:57:36.117173   27483 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0114 10:57:36.127419   27483 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0114 10:57:36.144759   27483 system_pods.go:43] waiting for kube-system pods to appear ...
	I0114 10:57:36.153830   27483 system_pods.go:59] 7 kube-system pods found
	I0114 10:57:36.153858   27483 system_pods.go:61] "coredns-6d4b75cb6d-qrnsv" [d6b36277-faa5-4a95-8152-7c3bee0e7d0e] Running
	I0114 10:57:36.153863   27483 system_pods.go:61] "etcd-test-preload-105443" [c83b44f0-7ce9-4416-bd67-f187352b1165] Running
	I0114 10:57:36.153868   27483 system_pods.go:61] "kube-apiserver-test-preload-105443" [aad1462d-1f15-40a5-ac94-e61bf60ad44f] Pending
	I0114 10:57:36.153876   27483 system_pods.go:61] "kube-controller-manager-test-preload-105443" [d9cc4f73-5345-45fa-9330-2ddafad96428] Pending
	I0114 10:57:36.153880   27483 system_pods.go:61] "kube-proxy-llwpq" [91739d92-c705-413a-9c93-bd3ff50a4bde] Running
	I0114 10:57:36.153884   27483 system_pods.go:61] "kube-scheduler-test-preload-105443" [86084f99-09ca-4e55-a94b-8d8fbf172cfd] Pending
	I0114 10:57:36.153888   27483 system_pods.go:61] "storage-provisioner" [6605fd74-8f22-4580-a14b-c949d30b4406] Running
	I0114 10:57:36.153892   27483 system_pods.go:74] duration metric: took 9.117201ms to wait for pod list to return data ...
	I0114 10:57:36.153902   27483 node_conditions.go:102] verifying NodePressure condition ...
	I0114 10:57:36.161282   27483 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0114 10:57:36.161314   27483 node_conditions.go:123] node cpu capacity is 2
	I0114 10:57:36.161327   27483 node_conditions.go:105] duration metric: took 7.420477ms to run NodePressure ...
	I0114 10:57:36.161346   27483 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0114 10:57:36.438345   27483 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I0114 10:57:36.442581   27483 kubeadm.go:778] kubelet initialised
	I0114 10:57:36.442603   27483 kubeadm.go:779] duration metric: took 4.234305ms waiting for restarted kubelet to initialise ...
	I0114 10:57:36.442609   27483 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 10:57:36.449387   27483 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-qrnsv" in "kube-system" namespace to be "Ready" ...
	I0114 10:57:36.461391   27483 pod_ready.go:92] pod "coredns-6d4b75cb6d-qrnsv" in "kube-system" namespace has status "Ready":"True"
	I0114 10:57:36.461405   27483 pod_ready.go:81] duration metric: took 11.998919ms waiting for pod "coredns-6d4b75cb6d-qrnsv" in "kube-system" namespace to be "Ready" ...
	I0114 10:57:36.461412   27483 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-105443" in "kube-system" namespace to be "Ready" ...
	I0114 10:57:36.466860   27483 pod_ready.go:92] pod "etcd-test-preload-105443" in "kube-system" namespace has status "Ready":"True"
	I0114 10:57:36.466880   27483 pod_ready.go:81] duration metric: took 5.461777ms waiting for pod "etcd-test-preload-105443" in "kube-system" namespace to be "Ready" ...
	I0114 10:57:36.466890   27483 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-105443" in "kube-system" namespace to be "Ready" ...
	I0114 10:57:38.488273   27483 pod_ready.go:102] pod "kube-apiserver-test-preload-105443" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0114 10:57:40.984696   27483 pod_ready.go:92] pod "kube-apiserver-test-preload-105443" in "kube-system" namespace has status "Ready":"True"
	I0114 10:57:40.984735   27483 pod_ready.go:81] duration metric: took 4.517835335s waiting for pod "kube-apiserver-test-preload-105443" in "kube-system" namespace to be "Ready" ...
	I0114 10:57:40.984752   27483 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-105443" in "kube-system" namespace to be "Ready" ...
	I0114 10:57:42.997298   27483 pod_ready.go:102] pod "kube-controller-manager-test-preload-105443" in "kube-system" namespace has status "Ready":"False"
	I0114 10:57:44.498318   27483 pod_ready.go:92] pod "kube-controller-manager-test-preload-105443" in "kube-system" namespace has status "Ready":"True"
	I0114 10:57:44.498351   27483 pod_ready.go:81] duration metric: took 3.513585128s waiting for pod "kube-controller-manager-test-preload-105443" in "kube-system" namespace to be "Ready" ...
	I0114 10:57:44.498363   27483 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-llwpq" in "kube-system" namespace to be "Ready" ...
	I0114 10:57:46.512881   27483 pod_ready.go:102] pod "kube-proxy-llwpq" in "kube-system" namespace has status "Ready":"False"
	I0114 10:57:47.509080   27483 pod_ready.go:97] error getting pod "kube-proxy-llwpq" in "kube-system" namespace (skipping!): pods "kube-proxy-llwpq" not found
	I0114 10:57:47.509126   27483 pod_ready.go:81] duration metric: took 3.010744638s waiting for pod "kube-proxy-llwpq" in "kube-system" namespace to be "Ready" ...
	E0114 10:57:47.509138   27483 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-llwpq" in "kube-system" namespace (skipping!): pods "kube-proxy-llwpq" not found
	I0114 10:57:47.509146   27483 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-105443" in "kube-system" namespace to be "Ready" ...
	I0114 10:57:49.526235   27483 pod_ready.go:102] pod "kube-scheduler-test-preload-105443" in "kube-system" namespace has status "Ready":"False"
	I0114 10:57:51.028129   27483 pod_ready.go:92] pod "kube-scheduler-test-preload-105443" in "kube-system" namespace has status "Ready":"True"
	I0114 10:57:51.028163   27483 pod_ready.go:81] duration metric: took 3.519009848s waiting for pod "kube-scheduler-test-preload-105443" in "kube-system" namespace to be "Ready" ...
	I0114 10:57:51.028175   27483 pod_ready.go:38] duration metric: took 14.585558728s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 10:57:51.028193   27483 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0114 10:57:51.039230   27483 ops.go:34] apiserver oom_adj: -16
	I0114 10:57:51.039250   27483 kubeadm.go:631] restartCluster took 35.576481485s
	I0114 10:57:51.039256   27483 kubeadm.go:398] StartCluster complete in 35.69431939s
	I0114 10:57:51.039291   27483 settings.go:142] acquiring lock: {Name:mk3038dd5af57eb60f91199b2b839c5d07056ec6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:57:51.039394   27483 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15642-7076/kubeconfig
	I0114 10:57:51.040222   27483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15642-7076/kubeconfig: {Name:mk46c671e06b6e8f61c0cf0252effe586db914b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0114 10:57:51.041069   27483 kapi.go:59] client config for test-preload-105443: &rest.Config{Host:"https://192.168.39.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/client.key", CAFile:"/home/jenkins/minikube-integration/15642-7076/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 10:57:51.044149   27483 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "test-preload-105443" rescaled to 1
	I0114 10:57:51.044196   27483 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.39.172 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0114 10:57:51.046364   27483 out.go:177] * Verifying Kubernetes components...
	I0114 10:57:51.044247   27483 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0114 10:57:51.044265   27483 addons.go:486] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I0114 10:57:51.044476   27483 config.go:180] Loaded profile config "test-preload-105443": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
	I0114 10:57:51.047810   27483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:57:51.047821   27483 addons.go:65] Setting storage-provisioner=true in profile "test-preload-105443"
	I0114 10:57:51.047855   27483 addons.go:227] Setting addon storage-provisioner=true in "test-preload-105443"
	W0114 10:57:51.047869   27483 addons.go:236] addon storage-provisioner should already be in state true
	I0114 10:57:51.047829   27483 addons.go:65] Setting default-storageclass=true in profile "test-preload-105443"
	I0114 10:57:51.047925   27483 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-105443"
	I0114 10:57:51.047936   27483 host.go:66] Checking if "test-preload-105443" exists ...
	I0114 10:57:51.048287   27483 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0114 10:57:51.048327   27483 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:57:51.048353   27483 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0114 10:57:51.048390   27483 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:57:51.062985   27483 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:38759
	I0114 10:57:51.063084   27483 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:43593
	I0114 10:57:51.063424   27483 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:57:51.063668   27483 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:57:51.063963   27483 main.go:134] libmachine: Using API Version  1
	I0114 10:57:51.063993   27483 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:57:51.064133   27483 main.go:134] libmachine: Using API Version  1
	I0114 10:57:51.064155   27483 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:57:51.064349   27483 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:57:51.064451   27483 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:57:51.064540   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetState
	I0114 10:57:51.064867   27483 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0114 10:57:51.064912   27483 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:57:51.066870   27483 kapi.go:59] client config for test-preload-105443: &rest.Config{Host:"https://192.168.39.172:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/client.crt", KeyFile:"/home/jenkins/minikube-integration/15642-7076/.minikube/profiles/test-preload-105443/client.key", CAFile:"/home/jenkins/minikube-integration/15642-7076/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1888dc0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0114 10:57:51.080179   27483 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:45167
	I0114 10:57:51.080584   27483 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:57:51.080996   27483 main.go:134] libmachine: Using API Version  1
	I0114 10:57:51.081020   27483 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:57:51.081340   27483 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:57:51.081528   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetState
	I0114 10:57:51.083135   27483 main.go:134] libmachine: (test-preload-105443) Calling .DriverName
	I0114 10:57:51.085165   27483 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0114 10:57:51.083919   27483 addons.go:227] Setting addon default-storageclass=true in "test-preload-105443"
	W0114 10:57:51.086598   27483 addons.go:236] addon default-storageclass should already be in state true
	I0114 10:57:51.086639   27483 host.go:66] Checking if "test-preload-105443" exists ...
	I0114 10:57:51.086709   27483 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 10:57:51.086728   27483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0114 10:57:51.086747   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHHostname
	I0114 10:57:51.086971   27483 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0114 10:57:51.087005   27483 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:57:51.089937   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:51.090467   27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
	I0114 10:57:51.090499   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:51.090659   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHPort
	I0114 10:57:51.090832   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHKeyPath
	I0114 10:57:51.090971   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHUsername
	I0114 10:57:51.091131   27483 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/machines/test-preload-105443/id_rsa Username:docker}
	I0114 10:57:51.104203   27483 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:42793
	I0114 10:57:51.104594   27483 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:57:51.105021   27483 main.go:134] libmachine: Using API Version  1
	I0114 10:57:51.105049   27483 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:57:51.105329   27483 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:57:51.105813   27483 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0114 10:57:51.105849   27483 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:57:51.120471   27483 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:39745
	I0114 10:57:51.120848   27483 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:57:51.121289   27483 main.go:134] libmachine: Using API Version  1
	I0114 10:57:51.121313   27483 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:57:51.121628   27483 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:57:51.121799   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetState
	I0114 10:57:51.123271   27483 main.go:134] libmachine: (test-preload-105443) Calling .DriverName
	I0114 10:57:51.123503   27483 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0114 10:57:51.123520   27483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0114 10:57:51.123535   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHHostname
	I0114 10:57:51.126237   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:51.126652   27483 main.go:134] libmachine: (test-preload-105443) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:41:6d:81", ip: ""} in network mk-test-preload-105443: {Iface:virbr1 ExpiryTime:2023-01-14 11:54:58 +0000 UTC Type:0 Mac:52:54:00:41:6d:81 Iaid: IPaddr:192.168.39.172 Prefix:24 Hostname:test-preload-105443 Clientid:01:52:54:00:41:6d:81}
	I0114 10:57:51.126684   27483 main.go:134] libmachine: (test-preload-105443) DBG | domain test-preload-105443 has defined IP address 192.168.39.172 and MAC address 52:54:00:41:6d:81 in network mk-test-preload-105443
	I0114 10:57:51.126854   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHPort
	I0114 10:57:51.127025   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHKeyPath
	I0114 10:57:51.127183   27483 main.go:134] libmachine: (test-preload-105443) Calling .GetSSHUsername
	I0114 10:57:51.127352   27483 sshutil.go:53] new ssh client: &{IP:192.168.39.172 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/machines/test-preload-105443/id_rsa Username:docker}
	I0114 10:57:51.229392   27483 node_ready.go:35] waiting up to 6m0s for node "test-preload-105443" to be "Ready" ...
	I0114 10:57:51.229635   27483 start.go:813] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0114 10:57:51.232142   27483 node_ready.go:49] node "test-preload-105443" has status "Ready":"True"
	I0114 10:57:51.232161   27483 node_ready.go:38] duration metric: took 2.729825ms waiting for node "test-preload-105443" to be "Ready" ...
	I0114 10:57:51.232168   27483 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 10:57:51.237594   27483 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-qrnsv" in "kube-system" namespace to be "Ready" ...
	I0114 10:57:51.243277   27483 pod_ready.go:92] pod "coredns-6d4b75cb6d-qrnsv" in "kube-system" namespace has status "Ready":"True"
	I0114 10:57:51.243290   27483 pod_ready.go:81] duration metric: took 5.677803ms waiting for pod "coredns-6d4b75cb6d-qrnsv" in "kube-system" namespace to be "Ready" ...
	I0114 10:57:51.243298   27483 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-105443" in "kube-system" namespace to be "Ready" ...
	I0114 10:57:51.247704   27483 pod_ready.go:92] pod "etcd-test-preload-105443" in "kube-system" namespace has status "Ready":"True"
	I0114 10:57:51.247730   27483 pod_ready.go:81] duration metric: took 4.416982ms waiting for pod "etcd-test-preload-105443" in "kube-system" namespace to be "Ready" ...
	I0114 10:57:51.247741   27483 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-105443" in "kube-system" namespace to be "Ready" ...
	I0114 10:57:51.252735   27483 pod_ready.go:92] pod "kube-apiserver-test-preload-105443" in "kube-system" namespace has status "Ready":"True"
	I0114 10:57:51.252765   27483 pod_ready.go:81] duration metric: took 5.004597ms waiting for pod "kube-apiserver-test-preload-105443" in "kube-system" namespace to be "Ready" ...
	I0114 10:57:51.252776   27483 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-105443" in "kube-system" namespace to be "Ready" ...
	I0114 10:57:51.263259   27483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0114 10:57:51.278809   27483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0114 10:57:51.424521   27483 pod_ready.go:92] pod "kube-controller-manager-test-preload-105443" in "kube-system" namespace has status "Ready":"True"
	I0114 10:57:51.424541   27483 pod_ready.go:81] duration metric: took 171.759236ms waiting for pod "kube-controller-manager-test-preload-105443" in "kube-system" namespace to be "Ready" ...
	I0114 10:57:51.424553   27483 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r2zx5" in "kube-system" namespace to be "Ready" ...
	I0114 10:57:51.823589   27483 pod_ready.go:92] pod "kube-proxy-r2zx5" in "kube-system" namespace has status "Ready":"True"
	I0114 10:57:51.823614   27483 pod_ready.go:81] duration metric: took 399.0545ms waiting for pod "kube-proxy-r2zx5" in "kube-system" namespace to be "Ready" ...
	I0114 10:57:51.823627   27483 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-105443" in "kube-system" namespace to be "Ready" ...
	I0114 10:57:52.149652   27483 main.go:134] libmachine: Making call to close driver server
	I0114 10:57:52.149680   27483 main.go:134] libmachine: (test-preload-105443) Calling .Close
	I0114 10:57:52.149769   27483 main.go:134] libmachine: Making call to close driver server
	I0114 10:57:52.149811   27483 main.go:134] libmachine: (test-preload-105443) Calling .Close
	I0114 10:57:52.149960   27483 main.go:134] libmachine: Successfully made call to close driver server
	I0114 10:57:52.149977   27483 main.go:134] libmachine: Making call to close connection to plugin binary
	I0114 10:57:52.149987   27483 main.go:134] libmachine: Making call to close driver server
	I0114 10:57:52.149996   27483 main.go:134] libmachine: (test-preload-105443) Calling .Close
	I0114 10:57:52.150112   27483 main.go:134] libmachine: (test-preload-105443) DBG | Closing plugin on server side
	I0114 10:57:52.150123   27483 main.go:134] libmachine: Successfully made call to close driver server
	I0114 10:57:52.150140   27483 main.go:134] libmachine: Making call to close connection to plugin binary
	I0114 10:57:52.150161   27483 main.go:134] libmachine: Making call to close driver server
	I0114 10:57:52.150175   27483 main.go:134] libmachine: (test-preload-105443) Calling .Close
	I0114 10:57:52.150237   27483 main.go:134] libmachine: Successfully made call to close driver server
	I0114 10:57:52.150249   27483 main.go:134] libmachine: Making call to close connection to plugin binary
	I0114 10:57:52.150254   27483 main.go:134] libmachine: (test-preload-105443) DBG | Closing plugin on server side
	I0114 10:57:52.150420   27483 main.go:134] libmachine: Successfully made call to close driver server
	I0114 10:57:52.150450   27483 main.go:134] libmachine: Making call to close connection to plugin binary
	I0114 10:57:52.150458   27483 main.go:134] libmachine: (test-preload-105443) DBG | Closing plugin on server side
	I0114 10:57:52.150464   27483 main.go:134] libmachine: Making call to close driver server
	I0114 10:57:52.150480   27483 main.go:134] libmachine: (test-preload-105443) Calling .Close
	I0114 10:57:52.150678   27483 main.go:134] libmachine: Successfully made call to close driver server
	I0114 10:57:52.150741   27483 main.go:134] libmachine: Making call to close connection to plugin binary
	I0114 10:57:52.154203   27483 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0114 10:57:52.155779   27483 addons.go:488] enableAddons completed in 1.111516995s
	I0114 10:57:52.223817   27483 pod_ready.go:92] pod "kube-scheduler-test-preload-105443" in "kube-system" namespace has status "Ready":"True"
	I0114 10:57:52.223845   27483 pod_ready.go:81] duration metric: took 400.209326ms waiting for pod "kube-scheduler-test-preload-105443" in "kube-system" namespace to be "Ready" ...
	I0114 10:57:52.223859   27483 pod_ready.go:38] duration metric: took 991.680111ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0114 10:57:52.223926   27483 api_server.go:51] waiting for apiserver process to appear ...
	I0114 10:57:52.223983   27483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:57:52.239184   27483 api_server.go:71] duration metric: took 1.194962526s to wait for apiserver process to appear ...
	I0114 10:57:52.239212   27483 api_server.go:87] waiting for apiserver healthz status ...
	I0114 10:57:52.239224   27483 api_server.go:252] Checking apiserver healthz at https://192.168.39.172:8443/healthz ...
	I0114 10:57:52.244732   27483 api_server.go:278] https://192.168.39.172:8443/healthz returned 200:
	ok
	I0114 10:57:52.245772   27483 api_server.go:140] control plane version: v1.24.6
	I0114 10:57:52.245792   27483 api_server.go:130] duration metric: took 6.572562ms to wait for apiserver health ...
	I0114 10:57:52.245800   27483 system_pods.go:43] waiting for kube-system pods to appear ...
	I0114 10:57:52.426327   27483 system_pods.go:59] 7 kube-system pods found
	I0114 10:57:52.426367   27483 system_pods.go:61] "coredns-6d4b75cb6d-qrnsv" [d6b36277-faa5-4a95-8152-7c3bee0e7d0e] Running
	I0114 10:57:52.426372   27483 system_pods.go:61] "etcd-test-preload-105443" [c83b44f0-7ce9-4416-bd67-f187352b1165] Running
	I0114 10:57:52.426377   27483 system_pods.go:61] "kube-apiserver-test-preload-105443" [aad1462d-1f15-40a5-ac94-e61bf60ad44f] Running
	I0114 10:57:52.426383   27483 system_pods.go:61] "kube-controller-manager-test-preload-105443" [d9cc4f73-5345-45fa-9330-2ddafad96428] Running
	I0114 10:57:52.426390   27483 system_pods.go:61] "kube-proxy-r2zx5" [248e7f72-fa03-440c-bbd2-004eb0bfa8de] Running
	I0114 10:57:52.426396   27483 system_pods.go:61] "kube-scheduler-test-preload-105443" [86084f99-09ca-4e55-a94b-8d8fbf172cfd] Running
	I0114 10:57:52.426402   27483 system_pods.go:61] "storage-provisioner" [6605fd74-8f22-4580-a14b-c949d30b4406] Running
	I0114 10:57:52.426409   27483 system_pods.go:74] duration metric: took 180.602352ms to wait for pod list to return data ...
	I0114 10:57:52.426428   27483 default_sa.go:34] waiting for default service account to be created ...
	I0114 10:57:52.623880   27483 default_sa.go:45] found service account: "default"
	I0114 10:57:52.623906   27483 default_sa.go:55] duration metric: took 197.472804ms for default service account to be created ...
	I0114 10:57:52.623920   27483 system_pods.go:116] waiting for k8s-apps to be running ...
	I0114 10:57:52.826204   27483 system_pods.go:86] 7 kube-system pods found
	I0114 10:57:52.826241   27483 system_pods.go:89] "coredns-6d4b75cb6d-qrnsv" [d6b36277-faa5-4a95-8152-7c3bee0e7d0e] Running
	I0114 10:57:52.826247   27483 system_pods.go:89] "etcd-test-preload-105443" [c83b44f0-7ce9-4416-bd67-f187352b1165] Running
	I0114 10:57:52.826251   27483 system_pods.go:89] "kube-apiserver-test-preload-105443" [aad1462d-1f15-40a5-ac94-e61bf60ad44f] Running
	I0114 10:57:52.826259   27483 system_pods.go:89] "kube-controller-manager-test-preload-105443" [d9cc4f73-5345-45fa-9330-2ddafad96428] Running
	I0114 10:57:52.826263   27483 system_pods.go:89] "kube-proxy-r2zx5" [248e7f72-fa03-440c-bbd2-004eb0bfa8de] Running
	I0114 10:57:52.826267   27483 system_pods.go:89] "kube-scheduler-test-preload-105443" [86084f99-09ca-4e55-a94b-8d8fbf172cfd] Running
	I0114 10:57:52.826270   27483 system_pods.go:89] "storage-provisioner" [6605fd74-8f22-4580-a14b-c949d30b4406] Running
	I0114 10:57:52.826276   27483 system_pods.go:126] duration metric: took 202.352112ms to wait for k8s-apps to be running ...
	I0114 10:57:52.826282   27483 system_svc.go:44] waiting for kubelet service to be running ....
	I0114 10:57:52.826325   27483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:57:52.839706   27483 system_svc.go:56] duration metric: took 13.415483ms WaitForService to wait for kubelet.
	I0114 10:57:52.839735   27483 kubeadm.go:573] duration metric: took 1.795518151s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0114 10:57:52.839750   27483 node_conditions.go:102] verifying NodePressure condition ...
	I0114 10:57:53.023479   27483 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0114 10:57:53.023507   27483 node_conditions.go:123] node cpu capacity is 2
	I0114 10:57:53.023517   27483 node_conditions.go:105] duration metric: took 183.763157ms to run NodePressure ...
	I0114 10:57:53.023527   27483 start.go:217] waiting for startup goroutines ...
	I0114 10:57:53.023818   27483 ssh_runner.go:195] Run: rm -f paused
	I0114 10:57:53.072958   27483 start.go:536] kubectl: 1.26.0, cluster: 1.24.6 (minor skew: 2)
	I0114 10:57:53.075182   27483 out.go:177] 
	W0114 10:57:53.076646   27483 out.go:239] ! /usr/local/bin/kubectl is version 1.26.0, which may have incompatibilities with Kubernetes 1.24.6.
	I0114 10:57:53.078220   27483 out.go:177]   - Want kubectl v1.24.6? Try 'minikube kubectl -- get pods -A'
	I0114 10:57:53.079809   27483 out.go:177] * Done! kubectl is now configured to use "test-preload-105443" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	bae9a9cb3edc4       0bb39497ab33b       6 seconds ago       Running             kube-proxy                0                   fdc652d992931
	664155e1090a1       6e38f40d628db       6 seconds ago       Running             storage-provisioner       1                   6a158a01ab6d1
	3d2601ead597b       a4ca41631cc7a       16 seconds ago      Running             coredns                   1                   86a98a78e9201
	386806347631e       c786c777a4e1c       17 seconds ago      Running             kube-scheduler            0                   0a0b034e3e66a
	d66943d237a6b       aebe758cef4cd       24 seconds ago      Running             etcd                      2                   114ee96bae199
	5748edecba614       c6c20157a4233       28 seconds ago      Running             kube-controller-manager   0                   afc5cb9211d12
	230feee6c17df       860f263331c95       29 seconds ago      Running             kube-apiserver            0                   a9f225cfadb42
	93d761323665a       aebe758cef4cd       38 seconds ago      Exited              etcd                      1                   114ee96bae199
	
	* 
	* ==> containerd <==
	* -- Journal begins at Sat 2023-01-14 10:54:55 UTC, ends at Sat 2023-01-14 10:57:54 UTC. --
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.075842399Z" level=warning msg="cleaning up after shim disconnected" id=70967bd17232a49ba21cb860577c5572b47e7d3adda4236f1afbda88224c66be namespace=k8s.io
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.075996708Z" level=info msg="cleaning up dead shim"
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.094378001Z" level=warning msg="cleanup warnings time=\"2023-01-14T10:57:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3698 runtime=io.containerd.runc.v2\n"
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.095095571Z" level=info msg="TearDown network for sandbox \"70967bd17232a49ba21cb860577c5572b47e7d3adda4236f1afbda88224c66be\" successfully"
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.095198624Z" level=info msg="StopPodSandbox for \"70967bd17232a49ba21cb860577c5572b47e7d3adda4236f1afbda88224c66be\" returns successfully"
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.144717551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:6605fd74-8f22-4580-a14b-c949d30b4406,Namespace:kube-system,Attempt:0,}"
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.173245211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.173405532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.173532882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.173814391Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a158a01ab6d1d8ecc44e43209bddb915dbfcb719796b69c67437bb1c08a45ce pid=3721 runtime=io.containerd.runc.v2
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.616629899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r2zx5,Uid:248e7f72-fa03-440c-bbd2-004eb0bfa8de,Namespace:kube-system,Attempt:0,}"
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.650379129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:6605fd74-8f22-4580-a14b-c949d30b4406,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a158a01ab6d1d8ecc44e43209bddb915dbfcb719796b69c67437bb1c08a45ce\""
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.658542016Z" level=info msg="CreateContainer within sandbox \"6a158a01ab6d1d8ecc44e43209bddb915dbfcb719796b69c67437bb1c08a45ce\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:1,}"
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.669695534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.669792844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.669802270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.670100245Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdc652d9929315d42e19c4c118e4753c48d42cdd74856c80c36ace83ae9e2036 pid=3766 runtime=io.containerd.runc.v2
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.685920829Z" level=info msg="CreateContainer within sandbox \"6a158a01ab6d1d8ecc44e43209bddb915dbfcb719796b69c67437bb1c08a45ce\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} returns container id \"664155e1090a11bad07b6a94168b9043016feb171ba515de914ecb06fd0c8f85\""
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.691221562Z" level=info msg="StartContainer for \"664155e1090a11bad07b6a94168b9043016feb171ba515de914ecb06fd0c8f85\""
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.782122459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r2zx5,Uid:248e7f72-fa03-440c-bbd2-004eb0bfa8de,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdc652d9929315d42e19c4c118e4753c48d42cdd74856c80c36ace83ae9e2036\""
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.789335184Z" level=info msg="CreateContainer within sandbox \"fdc652d9929315d42e19c4c118e4753c48d42cdd74856c80c36ace83ae9e2036\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.829782964Z" level=info msg="StartContainer for \"664155e1090a11bad07b6a94168b9043016feb171ba515de914ecb06fd0c8f85\" returns successfully"
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.835675071Z" level=info msg="CreateContainer within sandbox \"fdc652d9929315d42e19c4c118e4753c48d42cdd74856c80c36ace83ae9e2036\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bae9a9cb3edc45cdc2f0c2f9fd9ad53d82e3c97492d5632fd7af21f805fa9ffd\""
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.839396252Z" level=info msg="StartContainer for \"bae9a9cb3edc45cdc2f0c2f9fd9ad53d82e3c97492d5632fd7af21f805fa9ffd\""
	Jan 14 10:57:47 test-preload-105443 containerd[2453]: time="2023-01-14T10:57:47.978629407Z" level=info msg="StartContainer for \"bae9a9cb3edc45cdc2f0c2f9fd9ad53d82e3c97492d5632fd7af21f805fa9ffd\" returns successfully"
	
	* 
	* ==> coredns [3d2601ead597bfe856431058224ec0abcc4744481797d307fa38e37060a509f8] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-105443
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-105443
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=59da54e5a04973bd17dc62cf57cb4173bab7bf81
	                    minikube.k8s.io/name=test-preload-105443
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_14T10_55_48_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Jan 2023 10:55:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-105443
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Jan 2023 10:57:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Jan 2023 10:57:34 +0000   Sat, 14 Jan 2023 10:55:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Jan 2023 10:57:34 +0000   Sat, 14 Jan 2023 10:55:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Jan 2023 10:57:34 +0000   Sat, 14 Jan 2023 10:55:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Jan 2023 10:57:34 +0000   Sat, 14 Jan 2023 10:55:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.172
	  Hostname:    test-preload-105443
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a81c02a12ea74a15855eb0a6a0f839b7
	  System UUID:                a81c02a1-2ea7-4a15-855e-b0a6a0f839b7
	  Boot ID:                    dfdfd74e-80fe-49d8-8ec9-2da740146b13
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.9
	  Kubelet Version:            v1.24.6
	  Kube-Proxy Version:         v1.24.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-qrnsv                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     114s
	  kube-system                 etcd-test-preload-105443                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m6s
	  kube-system                 kube-apiserver-test-preload-105443             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 kube-controller-manager-test-preload-105443    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 kube-proxy-r2zx5                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-scheduler-test-preload-105443             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 111s                   kube-proxy       
	  Normal  Starting                 6s                     kube-proxy       
	  Normal  NodeAllocatableEnforced  2m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m16s (x3 over 2m16s)  kubelet          Node test-preload-105443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m16s (x3 over 2m16s)  kubelet          Node test-preload-105443 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m16s (x3 over 2m16s)  kubelet          Node test-preload-105443 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  2m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m6s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m6s                   kubelet          Node test-preload-105443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s                   kubelet          Node test-preload-105443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s                   kubelet          Node test-preload-105443 status is now: NodeHasSufficientPID
	  Normal  NodeReady                116s                   kubelet          Node test-preload-105443 status is now: NodeReady
	  Normal  RegisteredNode           115s                   node-controller  Node test-preload-105443 event: Registered Node test-preload-105443 in Controller
	  Normal  Starting                 37s                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  34s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  31s (x8 over 37s)      kubelet          Node test-preload-105443 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s (x8 over 37s)      kubelet          Node test-preload-105443 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s (x7 over 37s)      kubelet          Node test-preload-105443 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8s                     node-controller  Node test-preload-105443 event: Registered Node test-preload-105443 in Controller
	
	* 
	* ==> dmesg <==
	* [Jan14 10:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071951] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +3.866409] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.109537] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.136036] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.045662] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan14 10:55] systemd-fstab-generator[551]: Ignoring "noauto" for root device
	[  +0.106246] systemd-fstab-generator[562]: Ignoring "noauto" for root device
	[  +0.189681] systemd-fstab-generator[585]: Ignoring "noauto" for root device
	[ +29.288739] systemd-fstab-generator[989]: Ignoring "noauto" for root device
	[ +10.205938] systemd-fstab-generator[1378]: Ignoring "noauto" for root device
	[Jan14 10:56] kauditd_printk_skb: 7 callbacks suppressed
	[ +11.213698] kauditd_printk_skb: 20 callbacks suppressed
	[Jan14 10:57] systemd-fstab-generator[2370]: Ignoring "noauto" for root device
	[  +0.242305] systemd-fstab-generator[2395]: Ignoring "noauto" for root device
	[  +0.156656] systemd-fstab-generator[2421]: Ignoring "noauto" for root device
	[  +0.228990] systemd-fstab-generator[2443]: Ignoring "noauto" for root device
	[  +6.084198] systemd-fstab-generator[3077]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [93d761323665a60c6f60a5e637528dc6a4dd02a6848672e1e233c83067a23f29] <==
	* 
	* 
	* ==> etcd [d66943d237a6b9fa76d5f665aeb42ce1f1cc93ae6f558d384f8ae46ec0ff5c9b] <==
	* {"level":"info","ts":"2023-01-14T10:57:30.208Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"bbf1bb039b0d3451","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-01-14T10:57:30.209Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-01-14T10:57:30.213Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 switched to configuration voters=(13542811178640421969)"}
	{"level":"info","ts":"2023-01-14T10:57:30.214Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a5f5c7bb54d744d4","local-member-id":"bbf1bb039b0d3451","added-peer-id":"bbf1bb039b0d3451","added-peer-peer-urls":["https://192.168.39.172:2380"]}
	{"level":"info","ts":"2023-01-14T10:57:30.214Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-14T10:57:30.215Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"bbf1bb039b0d3451","initial-advertise-peer-urls":["https://192.168.39.172:2380"],"listen-peer-urls":["https://192.168.39.172:2380"],"advertise-client-urls":["https://192.168.39.172:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.172:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-14T10:57:30.215Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-14T10:57:30.215Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a5f5c7bb54d744d4","local-member-id":"bbf1bb039b0d3451","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:57:30.215Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.172:2380"}
	{"level":"info","ts":"2023-01-14T10:57:30.215Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.172:2380"}
	{"level":"info","ts":"2023-01-14T10:57:30.215Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-14T10:57:31.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 is starting a new election at term 2"}
	{"level":"info","ts":"2023-01-14T10:57:31.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-01-14T10:57:31.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 received MsgPreVoteResp from bbf1bb039b0d3451 at term 2"}
	{"level":"info","ts":"2023-01-14T10:57:31.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 became candidate at term 3"}
	{"level":"info","ts":"2023-01-14T10:57:31.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 received MsgVoteResp from bbf1bb039b0d3451 at term 3"}
	{"level":"info","ts":"2023-01-14T10:57:31.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bbf1bb039b0d3451 became leader at term 3"}
	{"level":"info","ts":"2023-01-14T10:57:31.796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bbf1bb039b0d3451 elected leader bbf1bb039b0d3451 at term 3"}
	{"level":"info","ts":"2023-01-14T10:57:31.797Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"bbf1bb039b0d3451","local-member-attributes":"{Name:test-preload-105443 ClientURLs:[https://192.168.39.172:2379]}","request-path":"/0/members/bbf1bb039b0d3451/attributes","cluster-id":"a5f5c7bb54d744d4","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-14T10:57:31.797Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:57:31.799Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.172:2379"}
	{"level":"info","ts":"2023-01-14T10:57:31.799Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-14T10:57:31.800Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-14T10:57:31.800Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-14T10:57:31.800Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  10:57:54 up 3 min,  0 users,  load average: 1.21, 0.45, 0.17
	Linux test-preload-105443 5.10.57 #1 SMP Thu Nov 17 20:18:45 UTC 2022 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [230feee6c17df41d27d8473ecd00ccc00a5a82455941f4c899a37e0c53cf96be] <==
	* I0114 10:57:34.378134       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0114 10:57:34.378200       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0114 10:57:34.378476       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0114 10:57:34.378628       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0114 10:57:34.389527       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0114 10:57:34.438795       1 shared_informer.go:262] Caches are synced for crd-autoregister
	E0114 10:57:34.447651       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0114 10:57:34.448497       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0114 10:57:34.519629       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0114 10:57:34.519666       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0114 10:57:34.525991       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0114 10:57:34.526816       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0114 10:57:34.527807       1 cache.go:39] Caches are synced for autoregister controller
	I0114 10:57:34.531642       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0114 10:57:34.936614       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0114 10:57:35.336999       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0114 10:57:36.301293       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0114 10:57:36.314722       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0114 10:57:36.384083       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0114 10:57:36.408061       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0114 10:57:36.421998       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0114 10:57:46.932384       1 controller.go:611] quota admission added evaluator for: endpoints
	I0114 10:57:46.972336       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0114 10:57:46.978283       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0114 10:57:47.310690       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [5748edecba614c98a19e53d1e5078320f834903e891b01af88393d96737b5ed7] <==
	* I0114 10:57:46.944965       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0114 10:57:46.945232       1 shared_informer.go:262] Caches are synced for service account
	I0114 10:57:46.945315       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0114 10:57:46.945872       1 event.go:294] "Event occurred" object="test-preload-105443" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-105443 event: Registered Node test-preload-105443 in Controller"
	I0114 10:57:46.952501       1 shared_informer.go:262] Caches are synced for job
	I0114 10:57:46.962621       1 shared_informer.go:262] Caches are synced for HPA
	I0114 10:57:46.968511       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I0114 10:57:47.006548       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0114 10:57:47.006795       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0114 10:57:47.008694       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0114 10:57:47.014241       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0114 10:57:47.016984       1 shared_informer.go:262] Caches are synced for PV protection
	I0114 10:57:47.017397       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0114 10:57:47.019183       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-llwpq"
	I0114 10:57:47.050541       1 shared_informer.go:262] Caches are synced for persistent volume
	I0114 10:57:47.055586       1 shared_informer.go:262] Caches are synced for expand
	I0114 10:57:47.055614       1 shared_informer.go:262] Caches are synced for attach detach
	I0114 10:57:47.094218       1 shared_informer.go:262] Caches are synced for resource quota
	I0114 10:57:47.164387       1 shared_informer.go:262] Caches are synced for disruption
	I0114 10:57:47.164498       1 disruption.go:371] Sending events to api server.
	I0114 10:57:47.172124       1 shared_informer.go:262] Caches are synced for resource quota
	I0114 10:57:47.282849       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-r2zx5"
	I0114 10:57:47.545095       1 shared_informer.go:262] Caches are synced for garbage collector
	I0114 10:57:47.545136       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0114 10:57:47.608658       1 shared_informer.go:262] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [bae9a9cb3edc45cdc2f0c2f9fd9ad53d82e3c97492d5632fd7af21f805fa9ffd] <==
	* I0114 10:57:48.083512       1 node.go:163] Successfully retrieved node IP: 192.168.39.172
	I0114 10:57:48.083680       1 server_others.go:138] "Detected node IP" address="192.168.39.172"
	I0114 10:57:48.083796       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0114 10:57:48.135257       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0114 10:57:48.135274       1 server_others.go:206] "Using iptables Proxier"
	I0114 10:57:48.135669       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0114 10:57:48.136778       1 server.go:661] "Version info" version="v1.24.6"
	I0114 10:57:48.136822       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 10:57:48.138132       1 config.go:317] "Starting service config controller"
	I0114 10:57:48.138202       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0114 10:57:48.138337       1 config.go:226] "Starting endpoint slice config controller"
	I0114 10:57:48.138561       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0114 10:57:48.140222       1 config.go:444] "Starting node config controller"
	I0114 10:57:48.140256       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0114 10:57:48.238374       1 shared_informer.go:262] Caches are synced for service config
	I0114 10:57:48.239585       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0114 10:57:48.241160       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [386806347631e8ca6820b1913270cc0024734ee3aa46c7da2863a37081254fcd] <==
	* I0114 10:57:37.324007       1 serving.go:348] Generated self-signed cert in-memory
	I0114 10:57:37.674802       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.6"
	I0114 10:57:37.674921       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0114 10:57:37.683402       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0114 10:57:37.683733       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0114 10:57:37.683975       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0114 10:57:37.684159       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0114 10:57:37.684332       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0114 10:57:37.684568       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0114 10:57:37.684721       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0114 10:57:37.684847       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0114 10:57:37.784336       1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
	I0114 10:57:37.784757       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0114 10:57:37.785302       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-01-14 10:54:55 UTC, ends at Sat 2023-01-14 10:57:54 UTC. --
	Jan 14 10:57:36 test-preload-105443 kubelet[3083]: E0114 10:57:36.822649    3083 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"k8s.gcr.io/kube-proxy:v1.24.4\": failed to prepare extraction snapshot \"extract-800183515-fmUU sha256:3479df19c04c0f4516e7034bb7291daf7fb549f04da3393c0b786f8db240d0dc\": failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2776791587 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/34: file exists" image="k8s.gcr.io/kube-proxy:v1.24.4"
	Jan 14 10:57:36 test-preload-105443 kubelet[3083]: E0114 10:57:36.822777    3083 kuberuntime_manager.go:905] container &Container{Name:kube-proxy,Image:k8s.gcr.io/kube-proxy:v1.24.4,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-rsfvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-llwpq_kube-system(91739d92-c705-413a-9c93-bd3ff50a4bde): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "k8s.gcr.io/kube-proxy:v1.24.4": failed to prepare extraction snapshot "extract-800183515-fmUU sha256:3479df19c04c0f4516e7034bb7291daf7fb549f04da3393c0b786f8db240d0dc": failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2776791587 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/34: file exists
	Jan 14 10:57:36 test-preload-105443 kubelet[3083]: E0114 10:57:36.822816    3083 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"k8s.gcr.io/kube-proxy:v1.24.4\\\": failed to prepare extraction snapshot \\\"extract-800183515-fmUU sha256:3479df19c04c0f4516e7034bb7291daf7fb549f04da3393c0b786f8db240d0dc\\\": failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2776791587 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/34: file exists\"" pod="kube-system/kube-proxy-llwpq" podUID=91739d92-c705-413a-9c93-bd3ff50a4bde
	Jan 14 10:57:37 test-preload-105443 kubelet[3083]: I0114 10:57:37.149748    3083 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=84d1f443092d7d6e8972fbfd258f9adb path="/var/lib/kubelet/pods/84d1f443092d7d6e8972fbfd258f9adb/volumes"
	Jan 14 10:57:37 test-preload-105443 kubelet[3083]: I0114 10:57:37.155243    3083 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=8957cb515cac201172c0da126ed92840 path="/var/lib/kubelet/pods/8957cb515cac201172c0da126ed92840/volumes"
	Jan 14 10:57:37 test-preload-105443 kubelet[3083]: I0114 10:57:37.156762    3083 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=bf9ef742a4e80f823bde6bfa4ea6ea87 path="/var/lib/kubelet/pods/bf9ef742a4e80f823bde6bfa4ea6ea87/volumes"
	Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.116275    3083 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsfvn\" (UniqueName: \"kubernetes.io/projected/91739d92-c705-413a-9c93-bd3ff50a4bde-kube-api-access-rsfvn\") pod \"91739d92-c705-413a-9c93-bd3ff50a4bde\" (UID: \"91739d92-c705-413a-9c93-bd3ff50a4bde\") "
	Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.116317    3083 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/91739d92-c705-413a-9c93-bd3ff50a4bde-kube-proxy\") pod \"91739d92-c705-413a-9c93-bd3ff50a4bde\" (UID: \"91739d92-c705-413a-9c93-bd3ff50a4bde\") "
	Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.116335    3083 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91739d92-c705-413a-9c93-bd3ff50a4bde-lib-modules\") pod \"91739d92-c705-413a-9c93-bd3ff50a4bde\" (UID: \"91739d92-c705-413a-9c93-bd3ff50a4bde\") "
	Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.116361    3083 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91739d92-c705-413a-9c93-bd3ff50a4bde-xtables-lock\") pod \"91739d92-c705-413a-9c93-bd3ff50a4bde\" (UID: \"91739d92-c705-413a-9c93-bd3ff50a4bde\") "
	Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.116519    3083 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91739d92-c705-413a-9c93-bd3ff50a4bde-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "91739d92-c705-413a-9c93-bd3ff50a4bde" (UID: "91739d92-c705-413a-9c93-bd3ff50a4bde"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.116952    3083 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/91739d92-c705-413a-9c93-bd3ff50a4bde-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "91739d92-c705-413a-9c93-bd3ff50a4bde" (UID: "91739d92-c705-413a-9c93-bd3ff50a4bde"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jan 14 10:57:47 test-preload-105443 kubelet[3083]: W0114 10:57:47.117675    3083 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/91739d92-c705-413a-9c93-bd3ff50a4bde/volumes/kubernetes.io~configmap/kube-proxy: clearQuota called, but quotas disabled
	Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.118071    3083 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91739d92-c705-413a-9c93-bd3ff50a4bde-kube-proxy" (OuterVolumeSpecName: "kube-proxy") pod "91739d92-c705-413a-9c93-bd3ff50a4bde" (UID: "91739d92-c705-413a-9c93-bd3ff50a4bde"). InnerVolumeSpecName "kube-proxy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.125185    3083 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91739d92-c705-413a-9c93-bd3ff50a4bde-kube-api-access-rsfvn" (OuterVolumeSpecName: "kube-api-access-rsfvn") pod "91739d92-c705-413a-9c93-bd3ff50a4bde" (UID: "91739d92-c705-413a-9c93-bd3ff50a4bde"). InnerVolumeSpecName "kube-api-access-rsfvn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.216891    3083 reconciler.go:384] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91739d92-c705-413a-9c93-bd3ff50a4bde-lib-modules\") on node \"test-preload-105443\" DevicePath \"\""
	Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.216944    3083 reconciler.go:384] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91739d92-c705-413a-9c93-bd3ff50a4bde-xtables-lock\") on node \"test-preload-105443\" DevicePath \"\""
	Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.216958    3083 reconciler.go:384] "Volume detached for volume \"kube-api-access-rsfvn\" (UniqueName: \"kubernetes.io/projected/91739d92-c705-413a-9c93-bd3ff50a4bde-kube-api-access-rsfvn\") on node \"test-preload-105443\" DevicePath \"\""
	Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.216974    3083 reconciler.go:384] "Volume detached for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/91739d92-c705-413a-9c93-bd3ff50a4bde-kube-proxy\") on node \"test-preload-105443\" DevicePath \"\""
	Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.306779    3083 topology_manager.go:200] "Topology Admit Handler"
	Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.418162    3083 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/248e7f72-fa03-440c-bbd2-004eb0bfa8de-xtables-lock\") pod \"kube-proxy-r2zx5\" (UID: \"248e7f72-fa03-440c-bbd2-004eb0bfa8de\") " pod="kube-system/kube-proxy-r2zx5"
	Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.418363    3083 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/248e7f72-fa03-440c-bbd2-004eb0bfa8de-lib-modules\") pod \"kube-proxy-r2zx5\" (UID: \"248e7f72-fa03-440c-bbd2-004eb0bfa8de\") " pod="kube-system/kube-proxy-r2zx5"
	Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.418485    3083 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrvgd\" (UniqueName: \"kubernetes.io/projected/248e7f72-fa03-440c-bbd2-004eb0bfa8de-kube-api-access-lrvgd\") pod \"kube-proxy-r2zx5\" (UID: \"248e7f72-fa03-440c-bbd2-004eb0bfa8de\") " pod="kube-system/kube-proxy-r2zx5"
	Jan 14 10:57:47 test-preload-105443 kubelet[3083]: I0114 10:57:47.418575    3083 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/248e7f72-fa03-440c-bbd2-004eb0bfa8de-kube-proxy\") pod \"kube-proxy-r2zx5\" (UID: \"248e7f72-fa03-440c-bbd2-004eb0bfa8de\") " pod="kube-system/kube-proxy-r2zx5"
	Jan 14 10:57:49 test-preload-105443 kubelet[3083]: I0114 10:57:49.147266    3083 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=91739d92-c705-413a-9c93-bd3ff50a4bde path="/var/lib/kubelet/pods/91739d92-c705-413a-9c93-bd3ff50a4bde/volumes"
	
	* 
	* ==> storage-provisioner [664155e1090a11bad07b6a94168b9043016feb171ba515de914ecb06fd0c8f85] <==
	* I0114 10:57:47.874252       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0114 10:57:47.893618       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0114 10:57:47.894386       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-105443 -n test-preload-105443
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-105443 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestPreload]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context test-preload-105443 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context test-preload-105443 describe pod : exit status 1 (48.988125ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:277: kubectl --context test-preload-105443 describe pod : exit status 1
helpers_test.go:175: Cleaning up "test-preload-105443" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-105443
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-105443: (1.167204291s)
--- FAIL: TestPreload (192.53s)

TestRunningBinaryUpgrade (1730.14s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.16.0.42747067.exe start -p running-upgrade-110001 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0114 11:01:36.136724   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.42747067.exe start -p running-upgrade-110001 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: exit status 80 (11m43.434390774s)
-- stdout --
	* [running-upgrade-110001] minikube v1.16.0 on Ubuntu 20.04
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-7076/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - KUBECONFIG=/tmp/legacy_kubeconfig1336389058
	* Using the kvm2 driver based on user configuration
	* minikube 1.28.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.28.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	* Downloading VM boot image ...
	* Starting control plane node running-upgrade-110001 in cluster running-upgrade-110001
	* Downloading Kubernetes v1.20.0 preload ...
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on containerd 1.4.3 ...
	  - Generating certificates and keys ...| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW/ WW- WW\ WW| WW

-- /stdout --
** stderr ** 
	    > minikube-v1.16.0.iso.sha256: 65 B / 65 B [-------------] 100.00% ? p/s 0s    > minikube-v1.16.0.iso: 94.77 KiB / 212.62 MiB [>___________] 0.04% ? p/s ?    > minikube-v1.16.0.iso: 254.77 KiB / 212.62 MiB [>__________] 0.12% ? p/s ?    > minikube-v1.16.0.iso: 638.77 KiB / 212.62 MiB [>__________] 0.29% ? p/s ?    > minikube-v1.16.0.iso: 2.44 MiB / 212.62 MiB [] 1.15% 3.91 MiB p/s ETA 53s    > minikube-v1.16.0.iso: 8.59 MiB / 212.62 MiB [] 4.04% 3.91 MiB p/s ETA 52s    > minikube-v1.16.0.iso: 12.82 MiB / 212.62 MiB [ 6.03% 3.91 MiB p/s ETA 51s    > minikube-v1.16.0.iso: 18.52 MiB / 212.62 MiB [ 8.71% 5.39 MiB p/s ETA 36s    > minikube-v1.16.0.iso: 23.91 MiB / 212.62 MiB  11.24% 5.39 MiB p/s ETA 35s    > minikube-v1.16.0.iso: 29.67 MiB / 212.62 MiB  13.96% 5.39 MiB p/s ETA 33s    > minikube-v1.16.0.iso: 35.58 MiB / 212.62 MiB  16.74% 6.87 MiB p/s ETA 25s    > minikube-v1.16.0.iso: 43.04 MiB / 212.62 MiB  20.24% 6.87 MiB p/s ETA 24s    > minikube-v1.16.0.iso: 47.02 MiB / 212.62 MiB  22.11% 6.87 MiB
p/s ETA 24s    > minikube-v1.16.0.iso: 54.33 MiB / 212.62 MiB  25.55% 8.44 MiB p/s ETA 18s    > minikube-v1.16.0.iso: 58.98 MiB / 212.62 MiB  27.74% 8.44 MiB p/s ETA 18s    > minikube-v1.16.0.iso: 66.50 MiB / 212.62 MiB  31.27% 8.44 MiB p/s ETA 17s    > minikube-v1.16.0.iso: 70.97 MiB / 212.62 MiB  33.38% 9.69 MiB p/s ETA 14s    > minikube-v1.16.0.iso: 77.45 MiB / 212.62 MiB  36.43% 9.69 MiB p/s ETA 13s    > minikube-v1.16.0.iso: 82.52 MiB / 212.62 MiB  38.81% 9.69 MiB p/s ETA 13s    > minikube-v1.16.0.iso: 89.71 MiB / 212.62 MiB  42.19% 11.08 MiB p/s ETA 11    > minikube-v1.16.0.iso: 94.23 MiB / 212.62 MiB  44.32% 11.08 MiB p/s ETA 10    > minikube-v1.16.0.iso: 100.97 MiB / 212.62 MiB  47.49% 11.08 MiB p/s ETA 1    > minikube-v1.16.0.iso: 108.29 MiB / 212.62 MiB  50.93% 12.36 MiB p/s ETA 8    > minikube-v1.16.0.iso: 112.66 MiB / 212.62 MiB  52.98% 12.36 MiB p/s ETA 8    > minikube-v1.16.0.iso: 120.05 MiB / 212.62 MiB  56.46% 12.36 MiB p/s ETA 7    > minikube-v1.16.0.iso: 124.69 MiB / 212.62 MiB  58.64% 13.3
3 MiB p/s ETA 6    > minikube-v1.16.0.iso: 129.79 MiB / 212.62 MiB  61.04% 13.33 MiB p/s ETA 6    > minikube-v1.16.0.iso: 138.51 MiB / 212.62 MiB  65.14% 13.33 MiB p/s ETA 5    > minikube-v1.16.0.iso: 142.74 MiB / 212.62 MiB  67.14% 14.41 MiB p/s ETA 4    > minikube-v1.16.0.iso: 149.67 MiB / 212.62 MiB  70.39% 14.41 MiB p/s ETA 4    > minikube-v1.16.0.iso: 152.12 MiB / 212.62 MiB  71.55% 14.41 MiB p/s ETA 4    > minikube-v1.16.0.iso: 160.00 MiB / 212.62 MiB  75.25% 15.34 MiB p/s ETA 3    > minikube-v1.16.0.iso: 167.53 MiB / 212.62 MiB  78.79% 15.34 MiB p/s ETA 2    > minikube-v1.16.0.iso: 168.00 MiB / 212.62 MiB  79.01% 15.34 MiB p/s ETA 2    > minikube-v1.16.0.iso: 175.25 MiB / 212.62 MiB  82.42% 15.99 MiB p/s ETA 2    > minikube-v1.16.0.iso: 183.32 MiB / 212.62 MiB  86.22% 15.99 MiB p/s ETA 1    > minikube-v1.16.0.iso: 189.19 MiB / 212.62 MiB  88.98% 15.99 MiB p/s ETA 1    > minikube-v1.16.0.iso: 195.93 MiB / 212.62 MiB  92.15% 17.18 MiB p/s ETA 0    > minikube-v1.16.0.iso: 203.13 MiB / 212.62 MiB  95.54% 1
7.18 MiB p/s ETA 0    > minikube-v1.16.0.iso: 208.00 MiB / 212.62 MiB  97.83% 17.18 MiB p/s ETA 0    > minikube-v1.16.0.iso: 212.62 MiB / 212.62 MiB [] 100.00% 27.56 MiB p/s 8s    > preloaded-images-k8s-v8-v1....: 94.79 KiB / 902.99 MiB [>_] 0.01% ? p/s ?    > preloaded-images-k8s-v8-v1....: 510.79 KiB / 902.99 MiB [>] 0.06% ? p/s ?    > preloaded-images-k8s-v8-v1....: 2.23 MiB / 902.99 MiB [>__] 0.25% ? p/s ?    > preloaded-images-k8s-v8-v1....: 9.26 MiB / 902.99 MiB  1.03% 15.29 MiB p/    > preloaded-images-k8s-v8-v1....: 13.36 MiB / 902.99 MiB  1.48% 15.29 MiB p    > preloaded-images-k8s-v8-v1....: 17.32 MiB / 902.99 MiB  1.92% 15.29 MiB p    > preloaded-images-k8s-v8-v1....: 25.13 MiB / 902.99 MiB  2.78% 16.01 MiB p    > preloaded-images-k8s-v8-v1....: 32.44 MiB / 902.99 MiB  3.59% 16.01 MiB p    > preloaded-images-k8s-v8-v1....: 37.86 MiB / 902.99 MiB  4.19% 16.01 MiB p    > preloaded-images-k8s-v8-v1....: 43.36 MiB / 902.99 MiB  4.80% 16.93 MiB p    > preloaded-images-k8s-v8-v1....: 51.11 MiB / 902.99 M
iB  5.66% 16.93 MiB p    > preloaded-images-k8s-v8-v1....: 58.28 MiB / 902.99 MiB  6.45% 16.93 MiB p    > preloaded-images-k8s-v8-v1....: 62.42 MiB / 902.99 MiB  6.91% 17.89 MiB p    > preloaded-images-k8s-v8-v1....: 66.31 MiB / 902.99 MiB  7.34% 17.89 MiB p    > preloaded-images-k8s-v8-v1....: 73.86 MiB / 902.99 MiB  8.18% 17.89 MiB p    > preloaded-images-k8s-v8-v1....: 81.15 MiB / 902.99 MiB  8.99% 18.75 MiB p    > preloaded-images-k8s-v8-v1....: 88.86 MiB / 902.99 MiB  9.84% 18.75 MiB p    > preloaded-images-k8s-v8-v1....: 96.40 MiB / 902.99 MiB  10.68% 18.75 MiB     > preloaded-images-k8s-v8-v1....: 103.85 MiB / 902.99 MiB  11.50% 19.98 MiB    > preloaded-images-k8s-v8-v1....: 109.95 MiB / 902.99 MiB  12.18% 19.98 MiB    > preloaded-images-k8s-v8-v1....: 117.24 MiB / 902.99 MiB  12.98% 19.98 MiB    > preloaded-images-k8s-v8-v1....: 124.96 MiB / 902.99 MiB  13.84% 20.96 MiB    > preloaded-images-k8s-v8-v1....: 132.27 MiB / 902.99 MiB  14.65% 20.96 MiB    > preloaded-images-k8s-v8-v1....: 136.75 MiB / 902.
99 MiB  15.14% 20.96 MiB    > preloaded-images-k8s-v8-v1....: 144.70 MiB / 902.99 MiB  16.02% 21.73 MiB    > preloaded-images-k8s-v8-v1....: 152.14 MiB / 902.99 MiB  16.85% 21.73 MiB    > preloaded-images-k8s-v8-v1....: 161.78 MiB / 902.99 MiB  17.92% 21.73 MiB    > preloaded-images-k8s-v8-v1....: 168.00 MiB / 902.99 MiB  18.60% 22.84 MiB    > preloaded-images-k8s-v8-v1....: 176.00 MiB / 902.99 MiB  19.49% 22.84 MiB    > preloaded-images-k8s-v8-v1....: 184.00 MiB / 902.99 MiB  20.38% 22.84 MiB    > preloaded-images-k8s-v8-v1....: 188.55 MiB / 902.99 MiB  20.88% 23.57 MiB    > preloaded-images-k8s-v8-v1....: 195.96 MiB / 902.99 MiB  21.70% 23.57 MiB    > preloaded-images-k8s-v8-v1....: 203.65 MiB / 902.99 MiB  22.55% 23.57 MiB    > preloaded-images-k8s-v8-v1....: 210.95 MiB / 902.99 MiB  23.36% 24.46 MiB    > preloaded-images-k8s-v8-v1....: 219.06 MiB / 902.99 MiB  24.26% 24.46 MiB    > preloaded-images-k8s-v8-v1....: 227.18 MiB / 902.99 MiB  25.16% 24.46 MiB    > preloaded-images-k8s-v8-v1....: 235.28 MiB / 9
02.99 MiB  26.06% 25.50 MiB    > preloaded-images-k8s-v8-v1....: 242.58 MiB / 902.99 MiB  26.86% 25.50 MiB    > preloaded-images-k8s-v8-v1....: 247.32 MiB / 902.99 MiB  27.39% 25.50 MiB    > preloaded-images-k8s-v8-v1....: 253.14 MiB / 902.99 MiB  28.03% 25.77 MiB    > preloaded-images-k8s-v8-v1....: 259.62 MiB / 902.99 MiB  28.75% 25.77 MiB    > preloaded-images-k8s-v8-v1....: 264.00 MiB / 902.99 MiB  29.24% 25.77 MiB    > preloaded-images-k8s-v8-v1....: 271.39 MiB / 902.99 MiB  30.05% 26.07 MiB    > preloaded-images-k8s-v8-v1....: 279.10 MiB / 902.99 MiB  30.91% 26.07 MiB    > preloaded-images-k8s-v8-v1....: 285.06 MiB / 902.99 MiB  31.57% 26.07 MiB    > preloaded-images-k8s-v8-v1....: 293.70 MiB / 902.99 MiB  32.52% 26.79 MiB    > preloaded-images-k8s-v8-v1....: 299.80 MiB / 902.99 MiB  33.20% 26.79 MiB    > preloaded-images-k8s-v8-v1....: 307.09 MiB / 902.99 MiB  34.01% 26.79 MiB    > preloaded-images-k8s-v8-v1....: 312.00 MiB / 902.99 MiB  34.55% 27.03 MiB    > preloaded-images-k8s-v8-v1....: 319.27 MiB
/ 902.99 MiB  35.36% 27.03 MiB    > preloaded-images-k8s-v8-v1....: 327.37 MiB / 902.99 MiB  36.25% 27.03 MiB    > preloaded-images-k8s-v8-v1....: 335.68 MiB / 902.99 MiB  37.17% 27.83 MiB    > preloaded-images-k8s-v8-v1....: 343.59 MiB / 902.99 MiB  38.05% 27.83 MiB    > preloaded-images-k8s-v8-v1....: 350.51 MiB / 902.99 MiB  38.82% 27.83 MiB    > preloaded-images-k8s-v8-v1....: 358.59 MiB / 902.99 MiB  39.71% 28.50 MiB    > preloaded-images-k8s-v8-v1....: 365.89 MiB / 902.99 MiB  40.52% 28.50 MiB    > preloaded-images-k8s-v8-v1....: 371.97 MiB / 902.99 MiB  41.19% 28.50 MiB    > preloaded-images-k8s-v8-v1....: 376.00 MiB / 902.99 MiB  41.64% 28.53 MiB    > preloaded-images-k8s-v8-v1....: 384.00 MiB / 902.99 MiB  42.53% 28.53 MiB    > preloaded-images-k8s-v8-v1....: 391.45 MiB / 902.99 MiB  43.35% 28.53 MiB    > preloaded-images-k8s-v8-v1....: 395.91 MiB / 902.99 MiB  43.84% 28.83 MiB    > preloaded-images-k8s-v8-v1....: 400.00 MiB / 902.99 MiB  44.30% 28.83 MiB    > preloaded-images-k8s-v8-v1....: 407.65 M
iB / 902.99 MiB  45.14% 28.83 MiB    > preloaded-images-k8s-v8-v1....: 415.79 MiB / 902.99 MiB  46.05% 29.11 MiB    > preloaded-images-k8s-v8-v1....: 423.09 MiB / 902.99 MiB  46.85% 29.11 MiB    > preloaded-images-k8s-v8-v1....: 428.77 MiB / 902.99 MiB  47.48% 29.11 MiB    > preloaded-images-k8s-v8-v1....: 435.67 MiB / 902.99 MiB  48.25% 29.37 MiB    > preloaded-images-k8s-v8-v1....: 443.37 MiB / 902.99 MiB  49.10% 29.37 MiB    > preloaded-images-k8s-v8-v1....: 448.34 MiB / 902.99 MiB  49.65% 29.37 MiB    > preloaded-images-k8s-v8-v1....: 455.94 MiB / 902.99 MiB  50.49% 29.65 MiB    > preloaded-images-k8s-v8-v1....: 463.64 MiB / 902.99 MiB  51.35% 29.65 MiB    > preloaded-images-k8s-v8-v1....: 468.53 MiB / 902.99 MiB  51.89% 29.65 MiB    > preloaded-images-k8s-v8-v1....: 475.83 MiB / 902.99 MiB  52.70% 29.88 MiB    > preloaded-images-k8s-v8-v1....: 483.93 MiB / 902.99 MiB  53.59% 29.88 MiB    > preloaded-images-k8s-v8-v1....: 488.12 MiB / 902.99 MiB  54.06% 29.88 MiB    > preloaded-images-k8s-v8-v1....: 496.0
9 MiB / 902.99 MiB  54.94% 30.13 MiB    > preloaded-images-k8s-v8-v1....: 497.86 MiB / 902.99 MiB  55.14% 30.13 MiB    > preloaded-images-k8s-v8-v1....: 505.47 MiB / 902.99 MiB  55.98% 30.13 MiB    > preloaded-images-k8s-v8-v1....: 512.75 MiB / 902.99 MiB  56.78% 29.98 MiB    > preloaded-images-k8s-v8-v1....: 519.01 MiB / 902.99 MiB  57.48% 29.98 MiB    > preloaded-images-k8s-v8-v1....: 525.31 MiB / 902.99 MiB  58.17% 29.98 MiB    > preloaded-images-k8s-v8-v1....: 532.60 MiB / 902.99 MiB  58.98% 30.18 MiB    > preloaded-images-k8s-v8-v1....: 537.35 MiB / 902.99 MiB  59.51% 30.18 MiB    > preloaded-images-k8s-v8-v1....: 545.17 MiB / 902.99 MiB  60.37% 30.18 MiB    > preloaded-images-k8s-v8-v1....: 552.88 MiB / 902.99 MiB  61.23% 30.41 MiB    > preloaded-images-k8s-v8-v1....: 557.90 MiB / 902.99 MiB  61.78% 30.41 MiB    > preloaded-images-k8s-v8-v1....: 561.31 MiB / 902.99 MiB  62.16% 30.41 MiB    > preloaded-images-k8s-v8-v1....: 568.75 MiB / 902.99 MiB  62.98% 30.16 MiB    > preloaded-images-k8s-v8-v1....: 57
2.95 MiB / 902.99 MiB  63.45% 30.16 MiB    > preloaded-images-k8s-v8-v1....: 578.24 MiB / 902.99 MiB  64.04% 30.16 MiB    > preloaded-images-k8s-v8-v1....: 582.57 MiB / 902.99 MiB  64.52% 29.70 MiB    > preloaded-images-k8s-v8-v1....: 590.21 MiB / 902.99 MiB  65.36% 29.70 MiB    > preloaded-images-k8s-v8-v1....: 593.50 MiB / 902.99 MiB  65.73% 29.70 MiB    > preloaded-images-k8s-v8-v1....: 601.07 MiB / 902.99 MiB  66.56% 29.77 MiB    > preloaded-images-k8s-v8-v1....: 605.35 MiB / 902.99 MiB  67.04% 29.77 MiB    > preloaded-images-k8s-v8-v1....: 612.94 MiB / 902.99 MiB  67.88% 29.77 MiB    > preloaded-images-k8s-v8-v1....: 616.46 MiB / 902.99 MiB  68.27% 29.50 MiB    > preloaded-images-k8s-v8-v1....: 623.91 MiB / 902.99 MiB  69.09% 29.50 MiB    > preloaded-images-k8s-v8-v1....: 631.21 MiB / 902.99 MiB  69.90% 29.50 MiB    > preloaded-images-k8s-v8-v1....: 637.71 MiB / 902.99 MiB  70.62% 29.89 MiB    > preloaded-images-k8s-v8-v1....: 645.02 MiB / 902.99 MiB  71.43% 29.89 MiB    > preloaded-images-k8s-v8-v1....:
	    > preloaded-images-k8s-v8-v1....: 902.99 MiB / 902.99 MiB  100.00% 31.33 MiB
	[repeated carriage-return progress-bar updates for the preload download collapsed]
	E0114 11:01:45.107843   28551 vm_assets.go:127] stat("/home/jenkins/minikube-integration/15642-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-containerd-overlay2-amd64.tar.lz4") failed: stat /home/jenkins/minikube-integration/15642-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-containerd-overlay2-amd64.tar.lz4: no such file or directory
	E0114 11:01:45.108281   28551 vm_assets.go:127] stat("/home/jenkins/minikube-integration/15642-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-containerd-overlay2-amd64.tar.lz4") failed: stat /home/jenkins/minikube-integration/15642-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.0-containerd-overlay2-amd64.tar.lz4: no such file or directory
	    > kubectl.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
	    > kubeadm.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
	    > kubelet.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
	    > kubeadm: 37.40 MiB / 37.40 MiB [---------------] 100.00% 30.48 MiB p/s 2s
	    > kubectl: 38.37 MiB / 38.37 MiB [---------------] 100.00% 31.65 MiB p/s 2s
	    > kubelet: 108.69 MiB / 108.69 MiB [-------------] 100.00% 37.21 MiB p/s 3s
	[repeated carriage-return progress-bar updates for the binary downloads collapsed]
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost running-upgrade-110001] and IPs [192.168.72.54 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost running-upgrade-110001] and IPs [192.168.72.54 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	* 
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose
	X Exiting due to GUEST_START: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
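The kubeadm hint in the failure output above suggests listing control-plane containers with `crictl ... ps -a | grep kube | grep -v pause`. That filter can be expressed as a small helper for scripting the same check; this is an illustrative sketch only, and the sample `crictl ps -a` output below is invented:

```python
def kube_containers(crictl_ps_output: str) -> list[str]:
    """Filter `crictl ps -a` output the way the kubeadm hint does:
    keep lines mentioning a kube component, drop pause sandboxes."""
    return [
        line
        for line in crictl_ps_output.splitlines()
        if "kube" in line and "pause" not in line
    ]

# Invented sample output for illustration:
sample = """\
CONTAINER  IMAGE        STATE    NAME
abaaa768   k8s.gcr.io/  Exited   kube-scheduler
bbccd123   k8s.gcr.io/  Running  pause
ccdde456   k8s.gcr.io/  Exited   kube-apiserver"""

for line in kube_containers(sample):
    print(line)
```

Any container shown as `Exited` here is a candidate for `crictl logs CONTAINERID`, as the kubeadm message recommends.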
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.16.0.42747067.exe start -p running-upgrade-110001 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.42747067.exe start -p running-upgrade-110001 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: exit status 80 (9m57.993332251s)

                                                
                                                
-- stdout --
	* [running-upgrade-110001] minikube v1.16.0 on Ubuntu 20.04
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-7076/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - KUBECONFIG=/tmp/legacy_kubeconfig689318354
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-110001 in cluster running-upgrade-110001
	* Downloading Kubernetes v1.20.0 preload ...
	* Updating the running kvm2 "running-upgrade-110001" VM ...
	* Preparing Kubernetes v1.20.0 on containerd 1.4.3 ...
	  - Jan 14 11:13:00 running-upgrade-110001 kubelet[4802]: E0114 11:13:00.712162    4802 pod_workers.go:191] Error syncing pod f8d3d61ad8d45c80ab92bcedbe7fdb7d ("kube-controller-manager-running-upgrade-110001_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-running-upgrade-110001_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use
	* Enabled addons: 
	  - failed to create listener: failed to listen on 127.0.0.1:10259: listen tcp 127.0.0.1:10259: bind: address already in use
	  [the same listener error repeated 10 more times; collapsed]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v8-v1....: 902.99 MiB / 902.99 MiB  100.00% 35.77 MiB
	[repeated carriage-return progress-bar updates for the preload download collapsed]
	X Problems detected in kubelet:
	X Problems detected in kube-scheduler [abaaa768b3cf5f4a36601cf530b7184b3d3a5eeaab14255ef5b100de8e6ba253]:
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Problems detected in kube-scheduler [abaaa768b3cf5f4a36601cf530b7184b3d3a5eeaab14255ef5b100de8e6ba253]:
	[the same kube-scheduler problem line repeated 10 more times; collapsed]
	X Exiting due to GUEST_START: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: timed out waiting for the condition
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
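The repeated `failed to listen on 127.0.0.1:10259: ... bind: address already in use` lines in the first upgrade attempt indicate that a stale kube-scheduler was still holding its metrics port when the restarted one tried to bind it. A minimal, self-contained sketch of how that error arises (plain TCP sockets standing in for the scheduler, not minikube's actual code):

```python
import errno
import socket

# One socket takes an address and listens, standing in for the old scheduler.
old = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
old.bind(("127.0.0.1", 0))  # port 0: let the kernel pick any free port
old.listen(1)
addr = old.getsockname()

# A second bind to the same address fails the same way the log above does.
new = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    new.bind(addr)
except OSError as exc:
    # EADDRINUSE is the errno behind "bind: address already in use".
    print(exc.errno == errno.EADDRINUSE)
finally:
    new.close()
    old.close()
```

In the test run, the usual remedy is finding and stopping the stale process holding the port inside the guest before the new component starts.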
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.16.0.42747067.exe start -p running-upgrade-110001 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0114 11:21:53.343943   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/calico-110401/client.crt: no such file or directory
E0114 11:21:53.579300   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/cilium-110401/client.crt: no such file or directory
E0114 11:22:00.968564   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/custom-flannel-110401/client.crt: no such file or directory
E0114 11:22:16.863202   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/bridge-110401/client.crt: no such file or directory
E0114 11:22:28.652882   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/custom-flannel-110401/client.crt: no such file or directory
E0114 11:23:15.264995   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/calico-110401/client.crt: no such file or directory
E0114 11:23:38.784093   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/bridge-110401/client.crt: no such file or directory
E0114 11:23:41.012756   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/flannel-110401/client.crt: no such file or directory
E0114 11:23:52.030976   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
E0114 11:23:56.192058   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/enable-default-cni-110401/client.crt: no such file or directory
E0114 11:24:02.261655   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/auto-110400/client.crt: no such file or directory
E0114 11:24:08.695839   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/flannel-110401/client.crt: no such file or directory
E0114 11:24:23.875508   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/enable-default-cni-110401/client.crt: no such file or directory
E0114 11:24:28.384978   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: /tmp/minikube-v1.16.0.42747067.exe start -p running-upgrade-110001 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: exit status 80 (7m1.035484493s)

                                                
                                                
-- stdout --
	* [running-upgrade-110001] minikube v1.16.0 on Ubuntu 20.04
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-7076/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - KUBECONFIG=/tmp/legacy_kubeconfig2796853625
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-110001 in cluster running-upgrade-110001
	* Downloading Kubernetes v1.20.0 preload ...
	* Updating the running kvm2 "running-upgrade-110001" VM ...
	* Preparing Kubernetes v1.20.0 on containerd 1.4.3 ...
	
	

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v8-v1....: 902.99 MiB / 902.99 MiB  100.00% 37.16 MiB
	E0114 11:25:07.453536   38029 logs.go:203] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:25:07Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:25:09.474122   38029 logs.go:203] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:25:09Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:25:11.486032   38029 logs.go:203] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:25:11Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:25:13.500550   38029 logs.go:203] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:25:13Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:25:15.521645   38029 logs.go:203] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:25:15Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:25:17.540034   38029 logs.go:203] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:25:17Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:25:19.557632   38029 logs.go:203] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:25:19Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:25:21.570453   38029 logs.go:203] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:25:21Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:25:28.933926   38029 logs.go:203] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:25:28Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:25:30.954720   38029 logs.go:203] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:25:30Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:25:32.967095   38029 logs.go:203] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:25:32Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:25:34.989725   38029 logs.go:203] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:25:34Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:25:37.012032   38029 logs.go:203] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:25:36Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:25:39.032245   38029 logs.go:203] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:25:39Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:25:41.044653   38029 logs.go:203] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:25:41Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:25:43.062227   38029 logs.go:203] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:25:43Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:25:50.427600   38029 logs.go:203] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:25:50Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:25:52.439429   38029 logs.go:203] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:25:52Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:25:54.463060   38029 logs.go:203] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:25:54Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:25:56.491787   38029 logs.go:203] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:25:56Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:25:58.507391   38029 logs.go:203] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:25:58Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:26:00.538227   38029 logs.go:203] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:26:00Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:26:02.563524   38029 logs.go:203] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:26:02Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:26:04.588594   38029 logs.go:203] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:26:04Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:26:11.938607   38029 logs.go:203] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:26:11Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:26:13.960617   38029 logs.go:203] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:26:13Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:26:15.981079   38029 logs.go:203] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:26:15Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:26:18.000880   38029 logs.go:203] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:26:17Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:26:20.022782   38029 logs.go:203] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:26:20Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:26:22.035620   38029 logs.go:203] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:26:22Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:26:24.049994   38029 logs.go:203] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:26:24Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:26:26.065649   38029 logs.go:203] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:26:26Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:26:33.433428   38029 logs.go:203] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:26:33Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:26:35.454113   38029 logs.go:203] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:26:35Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:26:37.470403   38029 logs.go:203] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:26:37Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:26:39.493319   38029 logs.go:203] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:26:39Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:26:41.520292   38029 logs.go:203] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:26:41Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:26:43.540284   38029 logs.go:203] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:26:43Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:26:45.555920   38029 logs.go:203] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:26:45Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:26:47.572911   38029 logs.go:203] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:26:47Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:26:54.928412   38029 logs.go:203] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:26:54Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:26:56.958006   38029 logs.go:203] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:26:56Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:26:58.981733   38029 logs.go:203] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:26:58Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:27:00.999675   38029 logs.go:203] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:27:00Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:27:03.024630   38029 logs.go:203] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:27:03Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:27:05.041198   38029 logs.go:203] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:27:05Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:27:07.055280   38029 logs.go:203] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:27:07Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:27:09.067175   38029 logs.go:203] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:27:09Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:27:16.428052   38029 logs.go:203] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:27:16Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:27:18.443302   38029 logs.go:203] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:27:18Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:27:20.455065   38029 logs.go:203] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:27:20Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:27:22.470729   38029 logs.go:203] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:27:22Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:27:24.487407   38029 logs.go:203] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:27:24Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:27:26.500801   38029 logs.go:203] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:27:26Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:27:28.518214   38029 logs.go:203] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:27:28Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:27:30.538237   38029 logs.go:203] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:27:30Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:27:37.451713   38029 logs.go:203] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:27:37Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:27:39.467461   38029 logs.go:203] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:27:39Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:27:41.485282   38029 logs.go:203] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:27:41Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:27:43.506236   38029 logs.go:203] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:27:43Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:27:45.520684   38029 logs.go:203] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:27:45Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:27:47.534096   38029 logs.go:203] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:27:47Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:27:49.547462   38029 logs.go:203] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:27:49Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:27:51.567291   38029 logs.go:203] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:27:51Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:27:58.925329   38029 logs.go:203] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:27:58Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:28:00.937211   38029 logs.go:203] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:28:00Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:28:02.951439   38029 logs.go:203] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:28:02Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:28:04.964082   38029 logs.go:203] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:28:04Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:28:06.976178   38029 logs.go:203] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:28:06Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:28:08.997266   38029 logs.go:203] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:28:08Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:28:11.010877   38029 logs.go:203] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:28:10Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:28:13.027537   38029 logs.go:203] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:28:13Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	
	stderr:
	
	E0114 11:28:33.353442   38029 logs.go:203] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:28:33Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:28:35.367324   38029 logs.go:203] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:28:35Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:28:37.386348   38029 logs.go:203] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:28:37Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:28:39.405637   38029 logs.go:203] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:28:39Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:28:41.422014   38029 logs.go:203] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:28:41Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:28:43.440086   38029 logs.go:203] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:28:43Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:28:45.457490   38029 logs.go:203] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:28:45Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0114 11:28:47.473151   38029 logs.go:203] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:28:47Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	* 
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	
	stderr:
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose
	X Exiting due to GUEST_START: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	
	stderr:
	
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:133: legacy v1.16.0 start failed: exit status 80
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-01-14 11:28:49.792494109 +0000 UTC m=+4964.657794245
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-110001 -n running-upgrade-110001
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-110001 -n running-upgrade-110001: exit status 6 (242.328723ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0114 11:28:50.023879   40382 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-110001" does not appear in /home/jenkins/minikube-integration/15642-7076/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-110001" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-110001" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-110001
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-110001: (1.583633249s)
--- FAIL: TestRunningBinaryUpgrade (1730.14s)
E0114 11:28:52.030276   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory

TestStoppedBinaryUpgrade/Upgrade (254.03s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.16.0.1357470183.exe start -p stopped-upgrade-110158 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.16.0.1357470183.exe start -p stopped-upgrade-110158 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m10.158650671s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.16.0.1357470183.exe -p stopped-upgrade-110158 stop
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.16.0.1357470183.exe -p stopped-upgrade-110158 stop: (3.126655689s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-110158 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0114 11:04:28.384964   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
E0114 11:04:39.183035   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-110158 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: exit status 90 (2m0.734468017s)

-- stdout --
	* [stopped-upgrade-110158] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-7076/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-7076/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-110158 in cluster stopped-upgrade-110158
	* Downloading Kubernetes v1.20.0 preload ...
	* Restarting existing kvm2 VM for "stopped-upgrade-110158" ...
	
	

-- /stdout --
** stderr ** 
	I0114 11:04:14.794214   31090 out.go:296] Setting OutFile to fd 1 ...
	I0114 11:04:14.794464   31090 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 11:04:14.794479   31090 out.go:309] Setting ErrFile to fd 2...
	I0114 11:04:14.794493   31090 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 11:04:14.794677   31090 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-7076/.minikube/bin
	I0114 11:04:14.795320   31090 out.go:303] Setting JSON to false
	I0114 11:04:14.796622   31090 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6402,"bootTime":1673687853,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 11:04:14.796708   31090 start.go:135] virtualization: kvm guest
	I0114 11:04:14.799639   31090 out.go:177] * [stopped-upgrade-110158] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0114 11:04:14.801413   31090 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 11:04:14.801270   31090 notify.go:220] Checking for updates...
	I0114 11:04:14.802923   31090 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 11:04:14.804339   31090 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15642-7076/kubeconfig
	I0114 11:04:14.805981   31090 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-7076/.minikube
	I0114 11:04:14.808021   31090 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0114 11:04:14.810119   31090 config.go:180] Loaded profile config "stopped-upgrade-110158": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0114 11:04:14.814302   31090 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0114 11:04:14.814376   31090 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 11:04:14.835041   31090 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:45021
	I0114 11:04:14.835502   31090 main.go:134] libmachine: () Calling .GetVersion
	I0114 11:04:14.836215   31090 main.go:134] libmachine: Using API Version  1
	I0114 11:04:14.836243   31090 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 11:04:14.836648   31090 main.go:134] libmachine: () Calling .GetMachineName
	I0114 11:04:14.836879   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .DriverName
	I0114 11:04:14.839398   31090 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	I0114 11:04:14.840851   31090 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 11:04:14.841190   31090 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0114 11:04:14.841223   31090 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 11:04:14.858758   31090 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:39285
	I0114 11:04:14.859177   31090 main.go:134] libmachine: () Calling .GetVersion
	I0114 11:04:14.859773   31090 main.go:134] libmachine: Using API Version  1
	I0114 11:04:14.859801   31090 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 11:04:14.860110   31090 main.go:134] libmachine: () Calling .GetMachineName
	I0114 11:04:14.860260   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .DriverName
	I0114 11:04:14.898857   31090 out.go:177] * Using the kvm2 driver based on existing profile
	I0114 11:04:14.900625   31090 start.go:294] selected driver: kvm2
	I0114 11:04:14.900653   31090 start.go:838] validating driver "kvm2" against &{Name:stopped-upgrade-110158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.16.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:stopped-up
grade-110158 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.206 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0114 11:04:14.900786   31090 start.go:849] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 11:04:14.901525   31090 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 11:04:14.901691   31090 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15642-7076/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0114 11:04:14.918522   31090 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.28.0
	I0114 11:04:14.918848   31090 cni.go:95] Creating CNI manager for ""
	I0114 11:04:14.918865   31090 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
	I0114 11:04:14.918876   31090 start_flags.go:319] config:
	{Name:stopped-upgrade-110158 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.16.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:stopped-upgrade-110158 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIS
erverIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.206 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stati
cIP:}
	I0114 11:04:14.919000   31090 iso.go:125] acquiring lock: {Name:mk2d30b3fe95e944ec3a455ef50a6daa83b559c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 11:04:14.921965   31090 out.go:177] * Starting control plane node stopped-upgrade-110158 in cluster stopped-upgrade-110158
	I0114 11:04:14.923337   31090 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0114 11:04:15.373853   31090 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0114 11:04:15.373896   31090 cache.go:57] Caching tarball of preloaded images
	I0114 11:04:15.374134   31090 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0114 11:04:15.376707   31090 out.go:177] * Downloading Kubernetes v1.20.0 preload ...
	I0114 11:04:15.378354   31090 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0114 11:04:15.930477   31090 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/15642-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0114 11:04:33.157530   31090 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0114 11:04:33.157649   31090 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15642-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0114 11:04:34.156105   31090 cache.go:60] Finished verifying existence of preloaded tar for  v1.20.0 on containerd
	I0114 11:04:34.156276   31090 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/stopped-upgrade-110158/config.json ...
	I0114 11:04:34.158913   31090 cache.go:193] Successfully downloaded all kic artifacts
	I0114 11:04:34.158965   31090 start.go:364] acquiring machines lock for stopped-upgrade-110158: {Name:mk0b2fd58874b04199a2e55d480667572854a1a5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0114 11:04:51.192444   31090 start.go:368] acquired machines lock for "stopped-upgrade-110158" in 17.033448526s
	I0114 11:04:51.192489   31090 start.go:96] Skipping create...Using existing machine configuration
	I0114 11:04:51.192507   31090 fix.go:55] fixHost starting: 
	I0114 11:04:51.192960   31090 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0114 11:04:51.193007   31090 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 11:04:51.213449   31090 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:38663
	I0114 11:04:51.213847   31090 main.go:134] libmachine: () Calling .GetVersion
	I0114 11:04:51.214383   31090 main.go:134] libmachine: Using API Version  1
	I0114 11:04:51.214397   31090 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 11:04:51.214769   31090 main.go:134] libmachine: () Calling .GetMachineName
	I0114 11:04:51.214965   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .DriverName
	I0114 11:04:51.215116   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetState
	I0114 11:04:51.217106   31090 fix.go:103] recreateIfNeeded on stopped-upgrade-110158: state=Stopped err=<nil>
	I0114 11:04:51.217148   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .DriverName
	W0114 11:04:51.217287   31090 fix.go:129] unexpected machine state, will restart: <nil>
	I0114 11:04:51.316335   31090 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-110158" ...
	I0114 11:04:51.358009   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .Start
	I0114 11:04:51.358973   31090 main.go:134] libmachine: (stopped-upgrade-110158) Ensuring networks are active...
	I0114 11:04:51.360208   31090 main.go:134] libmachine: (stopped-upgrade-110158) Ensuring network default is active
	I0114 11:04:51.360670   31090 main.go:134] libmachine: (stopped-upgrade-110158) Ensuring network minikube-net is active
	I0114 11:04:51.361166   31090 main.go:134] libmachine: (stopped-upgrade-110158) Getting domain xml...
	I0114 11:04:51.362105   31090 main.go:134] libmachine: (stopped-upgrade-110158) Creating domain...
	I0114 11:04:53.048498   31090 main.go:134] libmachine: (stopped-upgrade-110158) Waiting to get IP...
	I0114 11:04:53.049563   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:04:53.050105   31090 main.go:134] libmachine: (stopped-upgrade-110158) Found IP for machine: 192.168.72.206
	I0114 11:04:53.050128   31090 main.go:134] libmachine: (stopped-upgrade-110158) Reserving static IP address...
	I0114 11:04:53.050147   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has current primary IP address 192.168.72.206 and MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:04:53.050622   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | found host DHCP lease matching {name: "stopped-upgrade-110158", mac: "52:54:00:3a:62:f4", ip: "192.168.72.206"} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-01-14 12:02:54 +0000 UTC Type:0 Mac:52:54:00:3a:62:f4 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:stopped-upgrade-110158 Clientid:01:52:54:00:3a:62:f4}
	I0114 11:04:53.050651   31090 main.go:134] libmachine: (stopped-upgrade-110158) Reserved static IP address: 192.168.72.206
	I0114 11:04:53.050669   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-110158", mac: "52:54:00:3a:62:f4", ip: "192.168.72.206"}
	I0114 11:04:53.050686   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | Getting to WaitForSSH function...
	I0114 11:04:53.050698   31090 main.go:134] libmachine: (stopped-upgrade-110158) Waiting for SSH to be available...
	I0114 11:04:53.053357   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:04:53.053869   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:62:f4", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-01-14 12:02:54 +0000 UTC Type:0 Mac:52:54:00:3a:62:f4 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:stopped-upgrade-110158 Clientid:01:52:54:00:3a:62:f4}
	I0114 11:04:53.053898   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined IP address 192.168.72.206 and MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:04:53.054191   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | Using SSH client type: external
	I0114 11:04:53.054221   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | Using SSH private key: /home/jenkins/minikube-integration/15642-7076/.minikube/machines/stopped-upgrade-110158/id_rsa (-rw-------)
	I0114 11:04:53.054252   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15642-7076/.minikube/machines/stopped-upgrade-110158/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0114 11:04:53.054273   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | About to run SSH command:
	I0114 11:04:53.054293   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | exit 0
	I0114 11:05:06.234225   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | SSH cmd err, output: <nil>: 
	I0114 11:05:06.234658   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetConfigRaw
	I0114 11:05:06.235346   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetIP
	I0114 11:05:06.238594   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:05:06.239206   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:62:f4", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-01-14 12:05:04 +0000 UTC Type:0 Mac:52:54:00:3a:62:f4 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:stopped-upgrade-110158 Clientid:01:52:54:00:3a:62:f4}
	I0114 11:05:06.239231   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined IP address 192.168.72.206 and MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:05:06.239650   31090 profile.go:148] Saving config to /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/stopped-upgrade-110158/config.json ...
	I0114 11:05:06.239901   31090 machine.go:88] provisioning docker machine ...
	I0114 11:05:06.239934   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .DriverName
	I0114 11:05:06.240149   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetMachineName
	I0114 11:05:06.240346   31090 buildroot.go:166] provisioning hostname "stopped-upgrade-110158"
	I0114 11:05:06.240373   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetMachineName
	I0114 11:05:06.240552   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHHostname
	I0114 11:05:06.243522   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:05:06.243976   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:62:f4", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-01-14 12:05:04 +0000 UTC Type:0 Mac:52:54:00:3a:62:f4 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:stopped-upgrade-110158 Clientid:01:52:54:00:3a:62:f4}
	I0114 11:05:06.244002   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined IP address 192.168.72.206 and MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:05:06.244150   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHPort
	I0114 11:05:06.244370   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHKeyPath
	I0114 11:05:06.244534   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHKeyPath
	I0114 11:05:06.244694   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHUsername
	I0114 11:05:06.244883   31090 main.go:134] libmachine: Using SSH client type: native
	I0114 11:05:06.245079   31090 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I0114 11:05:06.245113   31090 main.go:134] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-110158 && echo "stopped-upgrade-110158" | sudo tee /etc/hostname
	I0114 11:05:06.394646   31090 main.go:134] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-110158
	
	I0114 11:05:06.394749   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHHostname
	I0114 11:05:06.398116   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:05:06.398555   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:62:f4", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-01-14 12:05:04 +0000 UTC Type:0 Mac:52:54:00:3a:62:f4 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:stopped-upgrade-110158 Clientid:01:52:54:00:3a:62:f4}
	I0114 11:05:06.398628   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined IP address 192.168.72.206 and MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:05:06.398717   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHPort
	I0114 11:05:06.399066   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHKeyPath
	I0114 11:05:06.399281   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHKeyPath
	I0114 11:05:06.399452   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHUsername
	I0114 11:05:06.399679   31090 main.go:134] libmachine: Using SSH client type: native
	I0114 11:05:06.399846   31090 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I0114 11:05:06.399872   31090 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-110158' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-110158/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-110158' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0114 11:05:06.541503   31090 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0114 11:05:06.541533   31090 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15642-7076/.minikube CaCertPath:/home/jenkins/minikube-integration/15642-7076/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15642-7076/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15642-7076/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15642-7076/.minikube}
	I0114 11:05:06.541582   31090 buildroot.go:174] setting up certificates
	I0114 11:05:06.541592   31090 provision.go:83] configureAuth start
	I0114 11:05:06.541605   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetMachineName
	I0114 11:05:06.541995   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetIP
	I0114 11:05:06.545310   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:05:06.545682   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:62:f4", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-01-14 12:05:04 +0000 UTC Type:0 Mac:52:54:00:3a:62:f4 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:stopped-upgrade-110158 Clientid:01:52:54:00:3a:62:f4}
	I0114 11:05:06.545719   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined IP address 192.168.72.206 and MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:05:06.545973   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHHostname
	I0114 11:05:06.548663   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:05:06.549065   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:62:f4", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-01-14 12:05:04 +0000 UTC Type:0 Mac:52:54:00:3a:62:f4 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:stopped-upgrade-110158 Clientid:01:52:54:00:3a:62:f4}
	I0114 11:05:06.549096   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined IP address 192.168.72.206 and MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:05:06.549263   31090 provision.go:138] copyHostCerts
	I0114 11:05:06.549315   31090 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-7076/.minikube/ca.pem, removing ...
	I0114 11:05:06.549333   31090 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-7076/.minikube/ca.pem
	I0114 11:05:06.549393   31090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-7076/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15642-7076/.minikube/ca.pem (1078 bytes)
	I0114 11:05:06.549466   31090 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-7076/.minikube/cert.pem, removing ...
	I0114 11:05:06.549478   31090 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-7076/.minikube/cert.pem
	I0114 11:05:06.549506   31090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-7076/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15642-7076/.minikube/cert.pem (1123 bytes)
	I0114 11:05:06.549596   31090 exec_runner.go:144] found /home/jenkins/minikube-integration/15642-7076/.minikube/key.pem, removing ...
	I0114 11:05:06.549605   31090 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15642-7076/.minikube/key.pem
	I0114 11:05:06.549626   31090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15642-7076/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15642-7076/.minikube/key.pem (1679 bytes)
	I0114 11:05:06.549670   31090 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15642-7076/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15642-7076/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15642-7076/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-110158 san=[192.168.72.206 192.168.72.206 localhost 127.0.0.1 minikube stopped-upgrade-110158]
	I0114 11:05:06.669198   31090 provision.go:172] copyRemoteCerts
	I0114 11:05:06.669262   31090 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0114 11:05:06.669290   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHHostname
	I0114 11:05:06.672529   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:05:06.672962   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:62:f4", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-01-14 12:05:04 +0000 UTC Type:0 Mac:52:54:00:3a:62:f4 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:stopped-upgrade-110158 Clientid:01:52:54:00:3a:62:f4}
	I0114 11:05:06.672994   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined IP address 192.168.72.206 and MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:05:06.673334   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHPort
	I0114 11:05:06.673512   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHKeyPath
	I0114 11:05:06.673692   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHUsername
	I0114 11:05:06.673879   31090 sshutil.go:53] new ssh client: &{IP:192.168.72.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/machines/stopped-upgrade-110158/id_rsa Username:docker}
	I0114 11:05:06.770475   31090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0114 11:05:06.789115   31090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0114 11:05:06.806715   31090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0114 11:05:06.823917   31090 provision.go:86] duration metric: configureAuth took 282.309064ms
	I0114 11:05:06.823966   31090 buildroot.go:189] setting minikube options for container-runtime
	I0114 11:05:06.824175   31090 config.go:180] Loaded profile config "stopped-upgrade-110158": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0114 11:05:06.824189   31090 machine.go:91] provisioned docker machine in 584.272709ms
	I0114 11:05:06.824198   31090 start.go:300] post-start starting for "stopped-upgrade-110158" (driver="kvm2")
	I0114 11:05:06.824208   31090 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0114 11:05:06.824237   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .DriverName
	I0114 11:05:06.824571   31090 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0114 11:05:06.824596   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHHostname
	I0114 11:05:06.827892   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:05:06.828349   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:62:f4", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-01-14 12:05:04 +0000 UTC Type:0 Mac:52:54:00:3a:62:f4 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:stopped-upgrade-110158 Clientid:01:52:54:00:3a:62:f4}
	I0114 11:05:06.828382   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined IP address 192.168.72.206 and MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:05:06.828687   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHPort
	I0114 11:05:06.828896   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHKeyPath
	I0114 11:05:06.829070   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHUsername
	I0114 11:05:06.829246   31090 sshutil.go:53] new ssh client: &{IP:192.168.72.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/machines/stopped-upgrade-110158/id_rsa Username:docker}
	I0114 11:05:06.922752   31090 ssh_runner.go:195] Run: cat /etc/os-release
	I0114 11:05:06.927662   31090 info.go:137] Remote host: Buildroot 2020.02.8
	I0114 11:05:06.927691   31090 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-7076/.minikube/addons for local assets ...
	I0114 11:05:06.927762   31090 filesync.go:126] Scanning /home/jenkins/minikube-integration/15642-7076/.minikube/files for local assets ...
	I0114 11:05:06.927849   31090 filesync.go:149] local asset: /home/jenkins/minikube-integration/15642-7076/.minikube/files/etc/ssl/certs/139212.pem -> 139212.pem in /etc/ssl/certs
	I0114 11:05:06.927930   31090 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0114 11:05:06.935186   31090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/files/etc/ssl/certs/139212.pem --> /etc/ssl/certs/139212.pem (1708 bytes)
	I0114 11:05:06.953369   31090 start.go:303] post-start completed in 129.154833ms
	I0114 11:05:06.953397   31090 fix.go:57] fixHost completed within 15.760890523s
	I0114 11:05:06.953420   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHHostname
	I0114 11:05:06.956914   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:05:06.957469   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:62:f4", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-01-14 12:05:04 +0000 UTC Type:0 Mac:52:54:00:3a:62:f4 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:stopped-upgrade-110158 Clientid:01:52:54:00:3a:62:f4}
	I0114 11:05:06.957509   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined IP address 192.168.72.206 and MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:05:06.957719   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHPort
	I0114 11:05:06.957940   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHKeyPath
	I0114 11:05:06.958118   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHKeyPath
	I0114 11:05:06.958318   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHUsername
	I0114 11:05:06.958546   31090 main.go:134] libmachine: Using SSH client type: native
	I0114 11:05:06.958725   31090 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f1fc0] 0x7f5140 <nil>  [] 0s} 192.168.72.206 22 <nil> <nil>}
	I0114 11:05:06.958747   31090 main.go:134] libmachine: About to run SSH command:
	date +%s.%N
	I0114 11:05:07.091374   31090 main.go:134] libmachine: SSH cmd err, output: <nil>: 1673694307.028004632
	
	I0114 11:05:07.091398   31090 fix.go:207] guest clock: 1673694307.028004632
	I0114 11:05:07.091408   31090 fix.go:220] Guest: 2023-01-14 11:05:07.028004632 +0000 UTC Remote: 2023-01-14 11:05:06.953401387 +0000 UTC m=+52.239480324 (delta=74.603245ms)
	I0114 11:05:07.091432   31090 fix.go:191] guest clock delta is within tolerance: 74.603245ms
	I0114 11:05:07.091438   31090 start.go:83] releasing machines lock for "stopped-upgrade-110158", held for 15.898967766s
	I0114 11:05:07.091487   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .DriverName
	I0114 11:05:07.091811   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetIP
	I0114 11:05:07.095305   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:05:07.095719   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:62:f4", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-01-14 12:05:04 +0000 UTC Type:0 Mac:52:54:00:3a:62:f4 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:stopped-upgrade-110158 Clientid:01:52:54:00:3a:62:f4}
	I0114 11:05:07.095746   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined IP address 192.168.72.206 and MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:05:07.096045   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .DriverName
	I0114 11:05:07.096719   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .DriverName
	I0114 11:05:07.096899   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .DriverName
	I0114 11:05:07.097039   31090 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0114 11:05:07.097078   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHHostname
	I0114 11:05:07.097125   31090 ssh_runner.go:195] Run: cat /version.json
	I0114 11:05:07.097331   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHHostname
	I0114 11:05:07.100115   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:05:07.100371   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:05:07.100563   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:62:f4", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-01-14 12:05:04 +0000 UTC Type:0 Mac:52:54:00:3a:62:f4 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:stopped-upgrade-110158 Clientid:01:52:54:00:3a:62:f4}
	I0114 11:05:07.100588   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined IP address 192.168.72.206 and MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:05:07.100728   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:62:f4", ip: ""} in network minikube-net: {Iface:virbr4 ExpiryTime:2023-01-14 12:05:04 +0000 UTC Type:0 Mac:52:54:00:3a:62:f4 Iaid: IPaddr:192.168.72.206 Prefix:24 Hostname:stopped-upgrade-110158 Clientid:01:52:54:00:3a:62:f4}
	I0114 11:05:07.100777   31090 main.go:134] libmachine: (stopped-upgrade-110158) DBG | domain stopped-upgrade-110158 has defined IP address 192.168.72.206 and MAC address 52:54:00:3a:62:f4 in network minikube-net
	I0114 11:05:07.100813   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHPort
	I0114 11:05:07.101011   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHKeyPath
	I0114 11:05:07.101016   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHPort
	I0114 11:05:07.101193   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHKeyPath
	I0114 11:05:07.101215   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHUsername
	I0114 11:05:07.101413   31090 main.go:134] libmachine: (stopped-upgrade-110158) Calling .GetSSHUsername
	I0114 11:05:07.101448   31090 sshutil.go:53] new ssh client: &{IP:192.168.72.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/machines/stopped-upgrade-110158/id_rsa Username:docker}
	I0114 11:05:07.101699   31090 sshutil.go:53] new ssh client: &{IP:192.168.72.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/machines/stopped-upgrade-110158/id_rsa Username:docker}
	W0114 11:05:07.208721   31090 start.go:377] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0114 11:05:07.208795   31090 ssh_runner.go:195] Run: systemctl --version
	I0114 11:05:07.215967   31090 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0114 11:05:07.216198   31090 ssh_runner.go:195] Run: sudo crictl images --output json
	I0114 11:05:11.241837   31090 ssh_runner.go:235] Completed: sudo crictl images --output json: (4.025612825s)
	I0114 11:05:11.241978   31090 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0114 11:05:11.242043   31090 ssh_runner.go:195] Run: which lz4
	I0114 11:05:11.247108   31090 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0114 11:05:11.251813   31090 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0114 11:05:11.251845   31090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15642-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (472503869 bytes)
	I0114 11:05:13.549386   31090 containerd.go:496] Took 2.302311 seconds to copy over tarball
	I0114 11:05:13.549444   31090 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0114 11:05:18.182750   31090 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (4.633280993s)
	I0114 11:05:18.182859   31090 containerd.go:503] Took 4.633445 seconds to extract the tarball
	I0114 11:05:18.182888   31090 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0114 11:05:18.234189   31090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 11:05:18.424475   31090 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0114 11:05:18.480538   31090 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0114 11:05:18.536137   31090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0114 11:05:18.551654   31090 docker.go:189] disabling docker service ...
	I0114 11:05:18.551727   31090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0114 11:05:18.565515   31090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0114 11:05:18.578906   31090 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0114 11:05:18.777154   31090 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0114 11:05:18.952488   31090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0114 11:05:18.966269   31090 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0114 11:05:18.983945   31090 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.2"|' -i /etc/containerd/config.toml"
	I0114 11:05:18.995044   31090 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0114 11:05:19.005173   31090 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0114 11:05:19.015662   31090 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.d"|' -i /etc/containerd/config.toml"
	I0114 11:05:19.028440   31090 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0114 11:05:19.039655   31090 crio.go:137] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0114 11:05:19.039723   31090 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0114 11:05:19.075813   31090 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0114 11:05:19.084617   31090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0114 11:05:19.260972   31090 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0114 11:05:21.239607   31090 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (1.97855151s)
	I0114 11:05:21.239639   31090 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I0114 11:05:21.239691   31090 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0114 11:05:21.246588   31090 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0114 11:05:22.351399   31090 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0114 11:05:22.358329   31090 start.go:472] Will wait 60s for crictl version
	I0114 11:05:22.358406   31090 ssh_runner.go:195] Run: which crictl
	I0114 11:05:22.363629   31090 ssh_runner.go:195] Run: sudo /bin/crictl version
	I0114 11:05:22.395167   31090 retry.go:31] will retry after 14.405090881s: Temporary Error: sudo /bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:05:22Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0114 11:05:36.800743   31090 ssh_runner.go:195] Run: sudo /bin/crictl version
	I0114 11:05:36.819971   31090 retry.go:31] will retry after 17.468400798s: Temporary Error: sudo /bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:05:36Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0114 11:05:54.290337   31090 ssh_runner.go:195] Run: sudo /bin/crictl version
	I0114 11:05:54.311080   31090 retry.go:31] will retry after 21.098569212s: Temporary Error: sudo /bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:05:54Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0114 11:06:15.409954   31090 ssh_runner.go:195] Run: sudo /bin/crictl version
	I0114 11:06:15.433390   31090 out.go:177] 
	W0114 11:06:15.435034   31090 out.go:239] X Exiting due to RUNTIME_ENABLE: Temporary Error: sudo /bin/crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-01-14T11:06:15Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	W0114 11:06:15.435059   31090 out.go:239] * 
	W0114 11:06:15.436295   31090 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0114 11:06:15.437946   31090 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:207: upgrade from v1.16.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-110158 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (254.03s)
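The `retry.go` lines above show minikube retrying `sudo /bin/crictl version` with growing delays (14.4s, 17.5s, 21.1s) until its 60s budget expires; the underlying failure (`unknown service runtime.v1alpha2.RuntimeService`) typically indicates a CRI API version mismatch between the crictl binary in the guest and the containerd it is talking to. The retry-with-deadline pattern in the log can be sketched as follows (a hypothetical helper for illustration, not minikube's actual `retry.go`):

```python
import random
import time

def retry_with_budget(op, budget_s=60.0, base_s=10.0):
    """Retry op() with growing, jittered delays until budget_s elapses.

    Re-raises the last error once the next sleep would overrun the budget.
    """
    deadline = time.monotonic() + budget_s
    delay = base_s
    while True:
        try:
            return op()
        except Exception:
            if time.monotonic() + delay > deadline:
                raise  # budget exhausted; surface the last failure
            time.sleep(delay)
            # grow the delay ~20-40% per attempt, as the log intervals suggest
            delay *= 1.2 + random.random() * 0.2
```

A transient failure (e.g. a socket that appears a moment after `systemctl restart containerd`) is absorbed by the retries; a persistent one, like the v1alpha2 mismatch here, burns the whole budget and exits with `RUNTIME_ENABLE`.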

                                                
                                    

Test pass (262/297)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 29.05
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.25.3/json-events 24.47
11 TestDownloadOnly/v1.25.3/preload-exists 0
15 TestDownloadOnly/v1.25.3/LogsDuration 0.09
16 TestDownloadOnly/DeleteAll 0.18
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.16
19 TestBinaryMirror 0.56
20 TestOffline 117.16
22 TestAddons/Setup 148.42
24 TestAddons/parallel/Registry 16.64
25 TestAddons/parallel/Ingress 33.71
26 TestAddons/parallel/MetricsServer 5.6
27 TestAddons/parallel/HelmTiller 13.41
29 TestAddons/parallel/CSI 39.71
30 TestAddons/parallel/Headlamp 12.47
31 TestAddons/parallel/CloudSpanner 5.41
34 TestAddons/serial/GCPAuth/Namespaces 0.14
35 TestAddons/StoppedEnableDisable 92.59
36 TestCertOptions 75.5
37 TestCertExpiration 278.17
39 TestForceSystemdFlag 85.43
40 TestForceSystemdEnv 69.92
41 TestKVMDriverInstallOrUpdate 8.42
45 TestErrorSpam/setup 54.46
46 TestErrorSpam/start 0.41
47 TestErrorSpam/status 0.81
48 TestErrorSpam/pause 1.45
49 TestErrorSpam/unpause 1.57
50 TestErrorSpam/stop 1.55
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 78.71
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 28.6
57 TestFunctional/serial/KubeContext 0.04
58 TestFunctional/serial/KubectlGetPods 0.09
61 TestFunctional/serial/CacheCmd/cache/add_remote 4.77
62 TestFunctional/serial/CacheCmd/cache/add_local 2.26
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
64 TestFunctional/serial/CacheCmd/cache/list 0.07
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
66 TestFunctional/serial/CacheCmd/cache/cache_reload 2.26
67 TestFunctional/serial/CacheCmd/cache/delete 0.14
68 TestFunctional/serial/MinikubeKubectlCmd 0.12
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
70 TestFunctional/serial/ExtraConfig 29.98
71 TestFunctional/serial/ComponentHealth 0.07
72 TestFunctional/serial/LogsCmd 1.32
73 TestFunctional/serial/LogsFileCmd 1.35
75 TestFunctional/parallel/ConfigCmd 0.47
76 TestFunctional/parallel/DashboardCmd 29.9
77 TestFunctional/parallel/DryRun 0.32
78 TestFunctional/parallel/InternationalLanguage 0.19
79 TestFunctional/parallel/StatusCmd 0.99
82 TestFunctional/parallel/ServiceCmd 12.87
83 TestFunctional/parallel/ServiceCmdConnect 11.55
84 TestFunctional/parallel/AddonsCmd 0.17
85 TestFunctional/parallel/PersistentVolumeClaim 46.78
87 TestFunctional/parallel/SSHCmd 0.5
88 TestFunctional/parallel/CpCmd 1.02
89 TestFunctional/parallel/MySQL 26.82
90 TestFunctional/parallel/FileSync 0.24
91 TestFunctional/parallel/CertSync 1.8
95 TestFunctional/parallel/NodeLabels 0.08
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
99 TestFunctional/parallel/License 0.16
100 TestFunctional/parallel/Version/short 0.07
101 TestFunctional/parallel/Version/components 1.06
102 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
103 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
104 TestFunctional/parallel/ImageCommands/ImageListJson 0.45
105 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
106 TestFunctional/parallel/ImageCommands/ImageBuild 4.43
107 TestFunctional/parallel/ImageCommands/Setup 1.33
108 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
117 TestFunctional/parallel/ProfileCmd/profile_list 0.34
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
119 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.15
120 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.11
121 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.26
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
125 TestFunctional/parallel/MountCmd/any-port 20.83
126 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.39
127 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
128 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.83
129 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.63
130 TestFunctional/parallel/MountCmd/specific-port 2.01
131 TestFunctional/delete_addon-resizer_images 0.08
132 TestFunctional/delete_my-image_image 0.02
133 TestFunctional/delete_minikube_cached_images 0.02
136 TestIngressAddonLegacy/StartLegacyK8sCluster 99.34
138 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.85
139 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.39
140 TestIngressAddonLegacy/serial/ValidateIngressAddons 45.56
143 TestJSONOutput/start/Command 79.42
144 TestJSONOutput/start/Audit 0
146 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
147 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
149 TestJSONOutput/pause/Command 0.62
150 TestJSONOutput/pause/Audit 0
152 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/unpause/Command 0.59
156 TestJSONOutput/unpause/Audit 0
158 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/stop/Command 2.11
162 TestJSONOutput/stop/Audit 0
164 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
166 TestErrorJSONOutput 0.26
171 TestMainNoArgs 0.07
172 TestMinikubeProfile 112.49
175 TestMountStart/serial/StartWithMountFirst 27.11
176 TestMountStart/serial/VerifyMountFirst 0.42
177 TestMountStart/serial/StartWithMountSecond 27.41
178 TestMountStart/serial/VerifyMountSecond 0.41
179 TestMountStart/serial/DeleteFirst 0.68
180 TestMountStart/serial/VerifyMountPostDelete 0.41
181 TestMountStart/serial/Stop 1.15
182 TestMountStart/serial/RestartStopped 22.2
183 TestMountStart/serial/VerifyMountPostStop 0.42
186 TestMultiNode/serial/FreshStart2Nodes 183.8
187 TestMultiNode/serial/DeployApp2Nodes 5
188 TestMultiNode/serial/PingHostFrom2Pods 0.92
189 TestMultiNode/serial/AddNode 60.34
190 TestMultiNode/serial/ProfileList 0.25
191 TestMultiNode/serial/CopyFile 8.15
192 TestMultiNode/serial/StopNode 2.23
193 TestMultiNode/serial/StartAfterStop 61.17
194 TestMultiNode/serial/RestartKeepsNodes 530.62
195 TestMultiNode/serial/DeleteNode 2.08
196 TestMultiNode/serial/StopMultiNode 183.53
197 TestMultiNode/serial/RestartMultiNode 267.54
198 TestMultiNode/serial/ValidateNameConflict 55.94
205 TestScheduledStopUnix 125.18
211 TestKubernetesUpgrade 227.55
214 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
215 TestNoKubernetes/serial/StartWithK8s 106.07
216 TestNoKubernetes/serial/StartWithStopK8s 25.5
217 TestStoppedBinaryUpgrade/Setup 2.79
219 TestNoKubernetes/serial/Start 27.5
220 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
221 TestNoKubernetes/serial/ProfileList 49.81
222 TestNoKubernetes/serial/Stop 1.98
223 TestNoKubernetes/serial/StartNoArgs 26.73
224 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
232 TestNetworkPlugins/group/false 0.5
244 TestPause/serial/Start 111.58
245 TestStoppedBinaryUpgrade/MinikubeLogs 0.72
246 TestNetworkPlugins/group/auto/Start 78.04
247 TestPause/serial/SecondStartNoReconfiguration 30.91
248 TestPause/serial/Pause 0.97
249 TestPause/serial/VerifyStatus 0.42
250 TestPause/serial/Unpause 0.95
251 TestPause/serial/PauseAgain 1.02
252 TestPause/serial/DeletePaused 1.25
253 TestPause/serial/VerifyDeletedResources 20.06
254 TestNetworkPlugins/group/kindnet/Start 81.85
255 TestNetworkPlugins/group/auto/KubeletFlags 0.32
256 TestNetworkPlugins/group/cilium/Start 144.02
257 TestNetworkPlugins/group/auto/NetCatPod 11.46
258 TestNetworkPlugins/group/auto/DNS 0.23
259 TestNetworkPlugins/group/auto/Localhost 0.2
260 TestNetworkPlugins/group/auto/HairPin 0.19
261 TestNetworkPlugins/group/calico/Start 375.82
262 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
263 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
264 TestNetworkPlugins/group/kindnet/NetCatPod 11.43
265 TestNetworkPlugins/group/kindnet/DNS 0.21
266 TestNetworkPlugins/group/kindnet/Localhost 0.19
267 TestNetworkPlugins/group/kindnet/HairPin 0.18
268 TestNetworkPlugins/group/custom-flannel/Start 91.83
269 TestNetworkPlugins/group/cilium/ControllerPod 5.05
270 TestNetworkPlugins/group/cilium/KubeletFlags 0.29
271 TestNetworkPlugins/group/cilium/NetCatPod 12.54
272 TestNetworkPlugins/group/cilium/DNS 0.3
273 TestNetworkPlugins/group/cilium/Localhost 0.19
274 TestNetworkPlugins/group/cilium/HairPin 0.19
275 TestNetworkPlugins/group/enable-default-cni/Start 129.81
276 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
277 TestNetworkPlugins/group/custom-flannel/NetCatPod 22.37
278 TestNetworkPlugins/group/custom-flannel/DNS 0.23
279 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
280 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
281 TestNetworkPlugins/group/flannel/Start 75.86
282 TestNetworkPlugins/group/flannel/ControllerPod 7.02
283 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
284 TestNetworkPlugins/group/flannel/NetCatPod 11.44
285 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
286 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.42
287 TestNetworkPlugins/group/flannel/DNS 0.21
288 TestNetworkPlugins/group/flannel/Localhost 0.19
289 TestNetworkPlugins/group/flannel/HairPin 0.2
290 TestNetworkPlugins/group/bridge/Start 112.77
291 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
292 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
293 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
295 TestStartStop/group/old-k8s-version/serial/FirstStart 161.63
296 TestNetworkPlugins/group/calico/ControllerPod 5.03
297 TestNetworkPlugins/group/calico/KubeletFlags 0.26
298 TestNetworkPlugins/group/calico/NetCatPod 11.53
299 TestNetworkPlugins/group/calico/DNS 0.34
300 TestNetworkPlugins/group/calico/Localhost 0.19
301 TestNetworkPlugins/group/calico/HairPin 0.22
303 TestStartStop/group/no-preload/serial/FirstStart 96.18
304 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
305 TestNetworkPlugins/group/bridge/NetCatPod 13.39
306 TestNetworkPlugins/group/bridge/DNS 0.19
307 TestNetworkPlugins/group/bridge/Localhost 0.17
308 TestNetworkPlugins/group/bridge/HairPin 0.16
310 TestStartStop/group/embed-certs/serial/FirstStart 139.67
311 TestStartStop/group/old-k8s-version/serial/DeployApp 11.58
312 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3.94
313 TestStartStop/group/old-k8s-version/serial/Stop 102.49
314 TestStartStop/group/no-preload/serial/DeployApp 8.47
315 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.19
316 TestStartStop/group/no-preload/serial/Stop 92.45
317 TestStartStop/group/embed-certs/serial/DeployApp 9.44
318 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.08
319 TestStartStop/group/embed-certs/serial/Stop 102.46
320 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
321 TestStartStop/group/old-k8s-version/serial/SecondStart 519.44
322 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
323 TestStartStop/group/no-preload/serial/SecondStart 351.57
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
325 TestStartStop/group/embed-certs/serial/SecondStart 424.86
326 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.02
327 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
328 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.31
329 TestStartStop/group/no-preload/serial/Pause 2.84
331 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.58
332 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.41
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
334 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.47
335 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 17.02
336 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
337 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
338 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
339 TestStartStop/group/old-k8s-version/serial/Pause 2.7
341 TestStartStop/group/newest-cni/serial/FirstStart 70.03
342 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
343 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
344 TestStartStop/group/embed-certs/serial/Pause 2.98
345 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
346 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 416.6
347 TestStartStop/group/newest-cni/serial/DeployApp 0
348 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.02
349 TestStartStop/group/newest-cni/serial/Stop 2.13
350 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
351 TestStartStop/group/newest-cni/serial/SecondStart 76.45
352 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
353 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
354 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
355 TestStartStop/group/newest-cni/serial/Pause 2.25
356 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 16.02
357 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
358 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
359 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.47
TestDownloadOnly/v1.16.0/json-events (29.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-100605 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-100605 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (29.051514341s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (29.05s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-100605
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-100605: exit status 85 (83.057405ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-100605 | jenkins | v1.28.0 | 14 Jan 23 10:06 UTC |          |
	|         | -p download-only-100605        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 10:06:05
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 10:06:05.260110   13933 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:06:05.260229   13933 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:06:05.260237   13933 out.go:309] Setting ErrFile to fd 2...
	I0114 10:06:05.260242   13933 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:06:05.260352   13933 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-7076/.minikube/bin
	W0114 10:06:05.260460   13933 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15642-7076/.minikube/config/config.json: open /home/jenkins/minikube-integration/15642-7076/.minikube/config/config.json: no such file or directory
	I0114 10:06:05.260973   13933 out.go:303] Setting JSON to true
	I0114 10:06:05.261829   13933 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":2913,"bootTime":1673687853,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:06:05.261884   13933 start.go:135] virtualization: kvm guest
	I0114 10:06:05.264672   13933 out.go:97] [download-only-100605] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	W0114 10:06:05.264755   13933 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15642-7076/.minikube/cache/preloaded-tarball: no such file or directory
	I0114 10:06:05.264787   13933 notify.go:220] Checking for updates...
	I0114 10:06:05.266396   13933 out.go:169] MINIKUBE_LOCATION=15642
	I0114 10:06:05.267984   13933 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:06:05.269465   13933 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15642-7076/kubeconfig
	I0114 10:06:05.270892   13933 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-7076/.minikube
	I0114 10:06:05.272443   13933 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0114 10:06:05.275186   13933 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0114 10:06:05.275367   13933 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:06:05.388131   13933 out.go:97] Using the kvm2 driver based on user configuration
	I0114 10:06:05.388147   13933 start.go:294] selected driver: kvm2
	I0114 10:06:05.388159   13933 start.go:838] validating driver "kvm2" against <nil>
	I0114 10:06:05.388424   13933 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 10:06:05.388629   13933 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15642-7076/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0114 10:06:05.403347   13933 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.28.0
	I0114 10:06:05.403416   13933 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0114 10:06:05.403846   13933 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32101MB, container=0MB
	I0114 10:06:05.403938   13933 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0114 10:06:05.403966   13933 cni.go:95] Creating CNI manager for ""
	I0114 10:06:05.403973   13933 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
	I0114 10:06:05.403981   13933 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0114 10:06:05.403988   13933 start_flags.go:319] config:
	{Name:download-only-100605 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-100605 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:06:05.404157   13933 iso.go:125] acquiring lock: {Name:mk2d30b3fe95e944ec3a455ef50a6daa83b559c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 10:06:05.406293   13933 out.go:97] Downloading VM boot image ...
	I0114 10:06:05.406325   13933 download.go:101] Downloading: https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso.sha256 -> /home/jenkins/minikube-integration/15642-7076/.minikube/cache/iso/amd64/minikube-v1.28.0-1668700269-15235-amd64.iso
	I0114 10:06:16.136569   13933 out.go:97] Starting control plane node download-only-100605 in cluster download-only-100605
	I0114 10:06:16.136595   13933 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0114 10:06:16.245038   13933 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0114 10:06:16.245112   13933 cache.go:57] Caching tarball of preloaded images
	I0114 10:06:16.245311   13933 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0114 10:06:16.247275   13933 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0114 10:06:16.247293   13933 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0114 10:06:16.794073   13933 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/15642-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-100605"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
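The `download.go` lines in the log above fetch the preload tarball with a `?checksum=md5:...` fragment, so the client can verify the file's digest after download. A minimal sketch of that verification step (a hypothetical helper for illustration, not minikube's actual `download.go`):

```python
import hashlib

def verify_md5(path, expected_hex):
    """Return True if the file at path hashes to expected_hex (md5)."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        # hash in 1 MiB chunks so large tarballs don't need to fit in memory
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_hex
```

On a mismatch the download would be discarded and refetched rather than cached, which is why a corrupt preload shows up as a re-download rather than a start failure.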

                                                
                                    
TestDownloadOnly/v1.25.3/json-events (24.47s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-100605 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-100605 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (24.472731005s)
--- PASS: TestDownloadOnly/v1.25.3/json-events (24.47s)

                                                
                                    
TestDownloadOnly/v1.25.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/preload-exists
--- PASS: TestDownloadOnly/v1.25.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.25.3/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.25.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-100605
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-100605: exit status 85 (84.946131ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-100605 | jenkins | v1.28.0 | 14 Jan 23 10:06 UTC |          |
	|         | -p download-only-100605        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-100605 | jenkins | v1.28.0 | 14 Jan 23 10:06 UTC |          |
	|         | -p download-only-100605        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.25.3   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/14 10:06:34
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.19.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0114 10:06:34.397659   13968 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:06:34.397789   13968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:06:34.397799   13968 out.go:309] Setting ErrFile to fd 2...
	I0114 10:06:34.397804   13968 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:06:34.397912   13968 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-7076/.minikube/bin
	W0114 10:06:34.398070   13968 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15642-7076/.minikube/config/config.json: open /home/jenkins/minikube-integration/15642-7076/.minikube/config/config.json: no such file or directory
	I0114 10:06:34.398495   13968 out.go:303] Setting JSON to true
	I0114 10:06:34.399256   13968 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":2942,"bootTime":1673687853,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:06:34.399312   13968 start.go:135] virtualization: kvm guest
	I0114 10:06:34.401760   13968 out.go:97] [download-only-100605] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0114 10:06:34.401835   13968 notify.go:220] Checking for updates...
	I0114 10:06:34.403396   13968 out.go:169] MINIKUBE_LOCATION=15642
	I0114 10:06:34.404893   13968 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:06:34.406406   13968 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15642-7076/kubeconfig
	I0114 10:06:34.407837   13968 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-7076/.minikube
	I0114 10:06:34.409265   13968 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0114 10:06:34.411866   13968 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0114 10:06:34.412201   13968 config.go:180] Loaded profile config "download-only-100605": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0114 10:06:34.412260   13968 start.go:746] api.Load failed for download-only-100605: filestore "download-only-100605": Docker machine "download-only-100605" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0114 10:06:34.412304   13968 driver.go:365] Setting default libvirt URI to qemu:///system
	W0114 10:06:34.412336   13968 start.go:746] api.Load failed for download-only-100605: filestore "download-only-100605": Docker machine "download-only-100605" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0114 10:06:34.444160   13968 out.go:97] Using the kvm2 driver based on existing profile
	I0114 10:06:34.444177   13968 start.go:294] selected driver: kvm2
	I0114 10:06:34.444187   13968 start.go:838] validating driver "kvm2" against &{Name:download-only-100605 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-100605 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:06:34.444532   13968 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 10:06:34.444710   13968 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15642-7076/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0114 10:06:34.459543   13968 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.28.0
	I0114 10:06:34.460244   13968 cni.go:95] Creating CNI manager for ""
	I0114 10:06:34.460258   13968 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
	I0114 10:06:34.460275   13968 start_flags.go:319] config:
	{Name:download-only-100605 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:download-only-100605 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:06:34.460426   13968 iso.go:125] acquiring lock: {Name:mk2d30b3fe95e944ec3a455ef50a6daa83b559c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0114 10:06:34.462257   13968 out.go:97] Starting control plane node download-only-100605 in cluster download-only-100605
	I0114 10:06:34.462270   13968 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0114 10:06:34.959628   13968 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I0114 10:06:34.959664   13968 cache.go:57] Caching tarball of preloaded images
	I0114 10:06:34.959852   13968 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I0114 10:06:34.961974   13968 out.go:97] Downloading Kubernetes v1.25.3 preload ...
	I0114 10:06:34.961991   13968 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 ...
	I0114 10:06:35.512887   13968 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:60f9fee056da17edf086af60afca6341 -> /home/jenkins/minikube-integration/15642-7076/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-100605"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.3/LogsDuration (0.09s)

TestDownloadOnly/DeleteAll (0.18s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.18s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-100605
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-100659 --alsologtostderr --binary-mirror http://127.0.0.1:41409 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-100659" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-100659
--- PASS: TestBinaryMirror (0.56s)

TestOffline (117.16s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-110001 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-110001 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m56.064359023s)
helpers_test.go:175: Cleaning up "offline-containerd-110001" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-110001
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-110001: (1.096547264s)
--- PASS: TestOffline (117.16s)

TestAddons/Setup (148.42s)

=== RUN   TestAddons/Setup
addons_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p addons-100659 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p addons-100659 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m28.419387877s)
--- PASS: TestAddons/Setup (148.42s)

TestAddons/parallel/Registry (16.64s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:287: registry stabilized in 20.168845ms
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-n7bzm" [d7879eb3-5fb5-4f6c-a9ef-dc4217f46e6e] Running
addons_test.go:289: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.014133421s
addons_test.go:292: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-42ff6" [b49019bd-e2ef-4e49-a7fb-7f4a2b013db0] Running
addons_test.go:292: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.048359308s
addons_test.go:297: (dbg) Run:  kubectl --context addons-100659 delete po -l run=registry-test --now
addons_test.go:302: (dbg) Run:  kubectl --context addons-100659 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:302: (dbg) Done: kubectl --context addons-100659 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.907358023s)
addons_test.go:316: (dbg) Run:  out/minikube-linux-amd64 -p addons-100659 ip
2023/01/14 10:09:44 [DEBUG] GET http://192.168.39.106:5000
addons_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p addons-100659 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.64s)

TestAddons/parallel/Ingress (33.71s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:169: (dbg) Run:  kubectl --context addons-100659 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:169: (dbg) Done: kubectl --context addons-100659 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (10.014892879s)
addons_test.go:189: (dbg) Run:  kubectl --context addons-100659 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:202: (dbg) Run:  kubectl --context addons-100659 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:207: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [9a3d8dc2-8935-43c7-940a-d45c354958e3] Pending
helpers_test.go:342: "nginx" [9a3d8dc2-8935-43c7-940a-d45c354958e3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [9a3d8dc2-8935-43c7-940a-d45c354958e3] Running
addons_test.go:207: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.016351837s
addons_test.go:219: (dbg) Run:  out/minikube-linux-amd64 -p addons-100659 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:243: (dbg) Run:  kubectl --context addons-100659 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p addons-100659 ip
addons_test.go:254: (dbg) Run:  nslookup hello-john.test 192.168.39.106
addons_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p addons-100659 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:263: (dbg) Done: out/minikube-linux-amd64 -p addons-100659 addons disable ingress-dns --alsologtostderr -v=1: (1.794665937s)
addons_test.go:268: (dbg) Run:  out/minikube-linux-amd64 -p addons-100659 addons disable ingress --alsologtostderr -v=1
addons_test.go:268: (dbg) Done: out/minikube-linux-amd64 -p addons-100659 addons disable ingress --alsologtostderr -v=1: (7.572088091s)
--- PASS: TestAddons/parallel/Ingress (33.71s)

TestAddons/parallel/MetricsServer (5.6s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:364: metrics-server stabilized in 20.038538ms
addons_test.go:366: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-56c6cfbdd9-kxflw" [d7ed2ec8-7751-4b67-bf62-2fe6ab2f74d8] Running
addons_test.go:366: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.020726082s
addons_test.go:372: (dbg) Run:  kubectl --context addons-100659 top pods -n kube-system
addons_test.go:389: (dbg) Run:  out/minikube-linux-amd64 -p addons-100659 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.60s)

TestAddons/parallel/HelmTiller (13.41s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:413: tiller-deploy stabilized in 20.078209ms
addons_test.go:415: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-696b5bfbb7-pv88x" [c5c74463-7878-44d2-80f1-2ece6639c3f6] Running
addons_test.go:415: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.022764162s
addons_test.go:430: (dbg) Run:  kubectl --context addons-100659 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:430: (dbg) Done: kubectl --context addons-100659 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.942796757s)
addons_test.go:435: kubectl --context addons-100659 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p addons-100659 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.41s)

TestAddons/parallel/CSI (39.71s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:518: csi-hostpath-driver pods stabilized in 6.85927ms
addons_test.go:521: (dbg) Run:  kubectl --context addons-100659 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:526: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-100659 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:392: (dbg) Run:  kubectl --context addons-100659 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:531: (dbg) Run:  kubectl --context addons-100659 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:536: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [b189abdc-d25b-42bf-a120-7e962f48b50f] Pending
helpers_test.go:342: "task-pv-pod" [b189abdc-d25b-42bf-a120-7e962f48b50f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod" [b189abdc-d25b-42bf-a120-7e962f48b50f] Running
addons_test.go:536: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.010582055s
addons_test.go:541: (dbg) Run:  kubectl --context addons-100659 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:546: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-100659 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-100659 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:551: (dbg) Run:  kubectl --context addons-100659 delete pod task-pv-pod
addons_test.go:557: (dbg) Run:  kubectl --context addons-100659 delete pvc hpvc
addons_test.go:563: (dbg) Run:  kubectl --context addons-100659 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-100659 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-100659 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [4c791042-8022-4f1a-b26e-f7caee355be3] Pending
helpers_test.go:342: "task-pv-pod-restore" [4c791042-8022-4f1a-b26e-f7caee355be3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [4c791042-8022-4f1a-b26e-f7caee355be3] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 12.021032815s
addons_test.go:583: (dbg) Run:  kubectl --context addons-100659 delete pod task-pv-pod-restore
addons_test.go:587: (dbg) Run:  kubectl --context addons-100659 delete pvc hpvc-restore
addons_test.go:591: (dbg) Run:  kubectl --context addons-100659 delete volumesnapshot new-snapshot-demo
addons_test.go:595: (dbg) Run:  out/minikube-linux-amd64 -p addons-100659 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:595: (dbg) Done: out/minikube-linux-amd64 -p addons-100659 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.877554164s)
addons_test.go:599: (dbg) Run:  out/minikube-linux-amd64 -p addons-100659 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (39.71s)

TestAddons/parallel/Headlamp (12.47s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:774: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-100659 --alsologtostderr -v=1
addons_test.go:774: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-100659 --alsologtostderr -v=1: (1.44891015s)
addons_test.go:779: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-764769c887-9t7lz" [de52575d-4fe7-4154-89fb-9f9d2c9e80ae] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:342: "headlamp-764769c887-9t7lz" [de52575d-4fe7-4154-89fb-9f9d2c9e80ae] Running
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:779: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.015942107s
--- PASS: TestAddons/parallel/Headlamp (12.47s)

TestAddons/parallel/CloudSpanner (5.41s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:795: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:342: "cloud-spanner-emulator-7d7766f55c-2d4mj" [c2140b08-d98e-4b37-8aca-1e88121345b5] Running
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:795: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.007028648s
addons_test.go:798: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-100659
--- PASS: TestAddons/parallel/CloudSpanner (5.41s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:607: (dbg) Run:  kubectl --context addons-100659 create ns new-namespace
addons_test.go:621: (dbg) Run:  kubectl --context addons-100659 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/StoppedEnableDisable (92.59s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:139: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-100659
addons_test.go:139: (dbg) Done: out/minikube-linux-amd64 stop -p addons-100659: (1m32.371340816s)
addons_test.go:143: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-100659
addons_test.go:147: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-100659
--- PASS: TestAddons/StoppedEnableDisable (92.59s)

TestCertOptions (75.5s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-110458 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-110458 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m13.486068298s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-110458 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-110458 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-110458 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-110458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-110458
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-110458: (1.214802421s)
--- PASS: TestCertOptions (75.50s)

TestCertExpiration (278.17s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-110409 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-110409 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m18.106235299s)
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-110409 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-110409 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (18.889212881s)
helpers_test.go:175: Cleaning up "cert-expiration-110409" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-110409
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-110409: (1.176280244s)
--- PASS: TestCertExpiration (278.17s)

TestForceSystemdFlag (85.43s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-110618 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0114 11:06:36.136272   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-110618 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m24.022425796s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-110618 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-110618" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-110618
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-110618: (1.154913603s)
--- PASS: TestForceSystemdFlag (85.43s)

TestForceSystemdEnv (69.92s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-110349 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0114 11:03:52.030828   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-110349 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m8.289847669s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-110349 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-110349" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-110349
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-110349: (1.289152792s)
--- PASS: TestForceSystemdEnv (69.92s)

TestKVMDriverInstallOrUpdate (8.42s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (8.42s)

TestErrorSpam/setup (54.46s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-102020 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-102020 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-102020 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-102020 --driver=kvm2  --container-runtime=containerd: (54.460114037s)
--- PASS: TestErrorSpam/setup (54.46s)

TestErrorSpam/start (0.41s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102020 --log_dir /tmp/nospam-102020 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102020 --log_dir /tmp/nospam-102020 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102020 --log_dir /tmp/nospam-102020 start --dry-run
--- PASS: TestErrorSpam/start (0.41s)

TestErrorSpam/status (0.81s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102020 --log_dir /tmp/nospam-102020 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102020 --log_dir /tmp/nospam-102020 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102020 --log_dir /tmp/nospam-102020 status
--- PASS: TestErrorSpam/status (0.81s)

TestErrorSpam/pause (1.45s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102020 --log_dir /tmp/nospam-102020 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102020 --log_dir /tmp/nospam-102020 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102020 --log_dir /tmp/nospam-102020 pause
--- PASS: TestErrorSpam/pause (1.45s)

TestErrorSpam/unpause (1.57s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102020 --log_dir /tmp/nospam-102020 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102020 --log_dir /tmp/nospam-102020 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102020 --log_dir /tmp/nospam-102020 unpause
--- PASS: TestErrorSpam/unpause (1.57s)

TestErrorSpam/stop (1.55s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102020 --log_dir /tmp/nospam-102020 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-102020 --log_dir /tmp/nospam-102020 stop: (1.36295811s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102020 --log_dir /tmp/nospam-102020 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-102020 --log_dir /tmp/nospam-102020 stop
--- PASS: TestErrorSpam/stop (1.55s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /home/jenkins/minikube-integration/15642-7076/.minikube/files/etc/test/nested/copy/13921/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (78.71s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-linux-amd64 start -p functional-102121 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
functional_test.go:2161: (dbg) Done: out/minikube-linux-amd64 start -p functional-102121 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m18.708396982s)
--- PASS: TestFunctional/serial/StartWithProxy (78.71s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (28.6s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-linux-amd64 start -p functional-102121 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-linux-amd64 start -p functional-102121 --alsologtostderr -v=8: (28.602285167s)
functional_test.go:656: soft start took 28.602889931s for "functional-102121" cluster.
--- PASS: TestFunctional/serial/SoftStart (28.60s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-102121 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.77s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-102121 cache add k8s.gcr.io/pause:3.1: (1.707425755s)
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-102121 cache add k8s.gcr.io/pause:3.3: (1.657577554s)
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-102121 cache add k8s.gcr.io/pause:latest: (1.405287911s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.77s)

TestFunctional/serial/CacheCmd/cache/add_local (2.26s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-102121 /tmp/TestFunctionalserialCacheCmdcacheadd_local290610122/001
functional_test.go:1082: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 cache add minikube-local-cache-test:functional-102121
functional_test.go:1082: (dbg) Done: out/minikube-linux-amd64 -p functional-102121 cache add minikube-local-cache-test:functional-102121: (2.017518406s)
functional_test.go:1087: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 cache delete minikube-local-cache-test:functional-102121
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-102121
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.26s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.26s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-102121 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (240.456672ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 cache reload
functional_test.go:1151: (dbg) Done: out/minikube-linux-amd64 -p functional-102121 cache reload: (1.521103647s)
functional_test.go:1156: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.26s)

TestFunctional/serial/CacheCmd/cache/delete (0.14s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 kubectl -- --context functional-102121 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-102121 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (29.98s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-linux-amd64 start -p functional-102121 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:750: (dbg) Done: out/minikube-linux-amd64 start -p functional-102121 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (29.978367809s)
functional_test.go:754: restart took 29.978482071s for "functional-102121" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (29.98s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-102121 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.32s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 logs
functional_test.go:1229: (dbg) Done: out/minikube-linux-amd64 -p functional-102121 logs: (1.319984422s)
--- PASS: TestFunctional/serial/LogsCmd (1.32s)

TestFunctional/serial/LogsFileCmd (1.35s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 logs --file /tmp/TestFunctionalserialLogsFileCmd4101283646/001/logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-linux-amd64 -p functional-102121 logs --file /tmp/TestFunctionalserialLogsFileCmd4101283646/001/logs.txt: (1.347954032s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.35s)

TestFunctional/parallel/ConfigCmd (0.47s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 config unset cpus
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-102121 config get cpus: exit status 14 (79.718201ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 config unset cpus
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-102121 config get cpus: exit status 14 (73.67519ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

TestFunctional/parallel/DashboardCmd (29.9s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-102121 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:903: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-102121 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 19455: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (29.90s)

TestFunctional/parallel/DryRun (0.32s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-linux-amd64 start -p functional-102121 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:967: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-102121 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (156.845034ms)
-- stdout --
	* [functional-102121] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-7076/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-7076/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0114 10:24:13.052827   19343 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:24:13.052937   19343 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:24:13.052946   19343 out.go:309] Setting ErrFile to fd 2...
	I0114 10:24:13.052950   19343 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:24:13.053039   19343 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-7076/.minikube/bin
	I0114 10:24:13.053542   19343 out.go:303] Setting JSON to false
	I0114 10:24:13.054507   19343 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":4000,"bootTime":1673687853,"procs":256,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:24:13.054565   19343 start.go:135] virtualization: kvm guest
	I0114 10:24:13.057060   19343 out.go:177] * [functional-102121] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0114 10:24:13.058727   19343 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 10:24:13.058678   19343 notify.go:220] Checking for updates...
	I0114 10:24:13.061947   19343 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:24:13.063766   19343 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15642-7076/kubeconfig
	I0114 10:24:13.065305   19343 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-7076/.minikube
	I0114 10:24:13.067018   19343 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0114 10:24:13.069103   19343 config.go:180] Loaded profile config "functional-102121": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:24:13.069648   19343 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0114 10:24:13.069729   19343 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:24:13.085034   19343 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:43947
	I0114 10:24:13.085380   19343 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:24:13.085889   19343 main.go:134] libmachine: Using API Version  1
	I0114 10:24:13.085912   19343 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:24:13.086260   19343 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:24:13.086436   19343 main.go:134] libmachine: (functional-102121) Calling .DriverName
	I0114 10:24:13.086608   19343 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:24:13.086871   19343 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0114 10:24:13.086903   19343 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:24:13.101643   19343 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:33787
	I0114 10:24:13.102007   19343 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:24:13.102507   19343 main.go:134] libmachine: Using API Version  1
	I0114 10:24:13.102537   19343 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:24:13.102849   19343 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:24:13.103026   19343 main.go:134] libmachine: (functional-102121) Calling .DriverName
	I0114 10:24:13.135568   19343 out.go:177] * Using the kvm2 driver based on existing profile
	I0114 10:24:13.136906   19343 start.go:294] selected driver: kvm2
	I0114 10:24:13.136923   19343 start.go:838] validating driver "kvm2" against &{Name:functional-102121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-102121 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.238 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:24:13.137046   19343 start.go:849] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 10:24:13.139435   19343 out.go:177] 
	W0114 10:24:13.140979   19343 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0114 10:24:13.142515   19343 out.go:177] 

** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-linux-amd64 start -p functional-102121 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.32s)
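The memory validation that produces `RSRC_INSUFFICIENT_REQ_MEMORY` above can be approximated with a minimal shell sketch. The 1800MB floor, the 250MiB request, and the exit status 23 are taken from the log; the `check_memory` function name is illustrative, not minikube's implementation:

```shell
#!/bin/sh
# Approximate the requested-memory floor check seen in the DryRun output:
# "Requested memory allocation 250MiB is less than the usable minimum of 1800MB"
check_memory() {
  req_mb=$1
  min_mb=1800   # usable minimum quoted in the error message
  if [ "$req_mb" -lt "$min_mb" ]; then
    echo "RSRC_INSUFFICIENT_REQ_MEMORY: ${req_mb}MiB < ${min_mb}MB"
    return 23   # the non-zero exit status observed above
  fi
  echo "ok: ${req_mb}MiB"
}

check_memory 250 || echo "exit=$?"   # prints the error line, then exit=23
check_memory 2200                    # the value the suite actually starts with
```

The second test run (`--dry-run` without `--memory`) passes because it reuses the existing profile's 4000MB allocation.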

TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 start -p functional-102121 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-102121 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (187.575265ms)

-- stdout --
	* [functional-102121] minikube v1.28.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-7076/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-7076/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0114 10:24:05.900318   18834 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:24:05.900520   18834 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:24:05.900531   18834 out.go:309] Setting ErrFile to fd 2...
	I0114 10:24:05.900537   18834 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:24:05.900751   18834 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-7076/.minikube/bin
	I0114 10:24:05.901310   18834 out.go:303] Setting JSON to false
	I0114 10:24:05.902424   18834 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3993,"bootTime":1673687853,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 10:24:05.902483   18834 start.go:135] virtualization: kvm guest
	I0114 10:24:05.904800   18834 out.go:177] * [functional-102121] minikube v1.28.0 sur Ubuntu 20.04 (kvm/amd64)
	I0114 10:24:05.906851   18834 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 10:24:05.906780   18834 notify.go:220] Checking for updates...
	I0114 10:24:05.908445   18834 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 10:24:05.910030   18834 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15642-7076/kubeconfig
	I0114 10:24:05.911568   18834 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-7076/.minikube
	I0114 10:24:05.912991   18834 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0114 10:24:05.914941   18834 config.go:180] Loaded profile config "functional-102121": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:24:05.915472   18834 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0114 10:24:05.915528   18834 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:24:05.937227   18834 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:44867
	I0114 10:24:05.937546   18834 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:24:05.938001   18834 main.go:134] libmachine: Using API Version  1
	I0114 10:24:05.938021   18834 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:24:05.938282   18834 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:24:05.938380   18834 main.go:134] libmachine: (functional-102121) Calling .DriverName
	I0114 10:24:05.938510   18834 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 10:24:05.938770   18834 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0114 10:24:05.938801   18834 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:24:05.953827   18834 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:35181
	I0114 10:24:05.954279   18834 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:24:05.954743   18834 main.go:134] libmachine: Using API Version  1
	I0114 10:24:05.954755   18834 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:24:05.955094   18834 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:24:05.955243   18834 main.go:134] libmachine: (functional-102121) Calling .DriverName
	I0114 10:24:05.989376   18834 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0114 10:24:05.991018   18834 start.go:294] selected driver: kvm2
	I0114 10:24:05.991046   18834 start.go:838] validating driver "kvm2" against &{Name:functional-102121 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15235/minikube-v1.28.0-1668700269-15235-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.36-1668787669-15272@sha256:06094fc04b5dc02fbf1e2de7723c2a6db5d24c21fd2ddda91f6daaf29038cd9c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-102121 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.238 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0114 10:24:05.991184   18834 start.go:849] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 10:24:05.993353   18834 out.go:177] 
	W0114 10:24:05.994875   18834 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0114 10:24:05.996225   18834 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

TestFunctional/parallel/StatusCmd (0.99s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:853: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:865: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.99s)

TestFunctional/parallel/ServiceCmd (12.87s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-102121 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-102121 expose deployment hello-node --type=NodePort --port=8080

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-26kpg" [b5e2d70a-7cdc-42dc-98b8-7b8cb487cc51] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-26kpg" [b5e2d70a-7cdc-42dc-98b8-7b8cb487cc51] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 11.007232124s
functional_test.go:1449: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1463: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1476: found endpoint: https://192.168.39.238:31693
functional_test.go:1491: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 service hello-node --url --format={{.IP}}

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 service hello-node --url

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1511: found endpoint for hello-node: http://192.168.39.238:31693
--- PASS: TestFunctional/parallel/ServiceCmd (12.87s)
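A NodePort endpoint printed by `service --url`, like the `http://192.168.39.238:31693` found above, can be split into host and port with plain POSIX parameter expansion when scripting against the output. The URL below is copied from the log; the variable names are illustrative:

```shell
#!/bin/sh
# Split a minikube `service --url` endpoint into host and port.
url="http://192.168.39.238:31693"   # endpoint found by the test above

hostport="${url#*://}"   # strip the scheme  -> 192.168.39.238:31693
host="${hostport%:*}"    # drop the :port    -> 192.168.39.238
port="${hostport##*:}"   # keep only the port -> 31693

echo "host=$host port=$port"
```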

TestFunctional/parallel/ServiceCmdConnect (11.55s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-102121 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-102121 expose deployment hello-node-connect --type=NodePort --port=8080

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-6458c8fb6f-9b778" [59d85b9c-45e6-4066-b1b5-31819ba39f67] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-6458c8fb6f-9b778" [59d85b9c-45e6-4066-b1b5-31819ba39f67] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.013651627s
functional_test.go:1579: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 service hello-node-connect --url
functional_test.go:1585: found endpoint for hello-node-connect: http://192.168.39.238:30519
functional_test.go:1605: http://192.168.39.238:30519: success! body:

Hostname: hello-node-connect-6458c8fb6f-9b778

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.238:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.238:30519
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.55s)
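The echoserver response body logged above is a flat list of `key=value` lines, so individual fields can be pulled out with awk when checking a response in a script. The sample body below is a trimmed copy of the logged response; the variable names are illustrative:

```shell
#!/bin/sh
# Extract fields from an echoserver response body like the one logged above.
body='client_address=10.244.0.1
method=GET
request_uri=http://192.168.39.238:8080/'

# Split each line on the first "=" and match on the key.
method=$(printf '%s\n' "$body" | awk -F= '$1=="method"{print $2}')
client=$(printf '%s\n' "$body" | awk -F= '$1=="client_address"{print $2}')

echo "method=$method client=$client"
```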

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (46.78s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [fa6c24cd-2e25-4c9c-b448-a7ad00709514] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.023750208s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-102121 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-102121 apply -f testdata/storage-provisioner/pvc.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-102121 get pvc myclaim -o=json

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-102121 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-102121 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [d791cd79-30d8-4845-8398-3b0288f4c231] Pending
helpers_test.go:342: "sp-pod" [d791cd79-30d8-4845-8398-3b0288f4c231] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [d791cd79-30d8-4845-8398-3b0288f4c231] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.216885198s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-102121 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-102121 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-102121 delete -f testdata/storage-provisioner/pod.yaml: (1.562672039s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-102121 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [28afa186-240d-43b9-a991-4c6e152eddb3] Pending
helpers_test.go:342: "sp-pod" [28afa186-240d-43b9-a991-4c6e152eddb3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [28afa186-240d-43b9-a991-4c6e152eddb3] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.012494496s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-102121 exec sp-pod -- ls /tmp/mount
2023/01/14 10:24:42 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.78s)
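The touch/delete/recreate sequence above verifies that data on the claimed volume outlives the pod: `/tmp/mount/foo` written by the first `sp-pod` is still listed by the second. The same idea can be illustrated locally with a plain directory standing in for the PersistentVolume (an assumption for illustration only; the real test uses a storage-provisioner-backed PVC and kubectl):

```shell
#!/bin/sh
# Illustrate PVC persistence: data written by "pod 1" survives into "pod 2".
vol=$(mktemp -d)          # stands in for the PersistentVolume

# "pod 1": write a marker file, then the pod is deleted (the volume remains)
touch "$vol/foo"

# "pod 2": mounts the same volume and can still see the file
contents=$(ls "$vol")
echo "$contents"          # -> foo

rm -rf "$vol"
```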

TestFunctional/parallel/SSHCmd (0.5s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1672: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.50s)

TestFunctional/parallel/CpCmd (1.02s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 cp testdata/cp-test.txt /home/docker/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh -n functional-102121 "sudo cat /home/docker/cp-test.txt"
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 cp functional-102121:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1033184606/001/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh -n functional-102121 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.02s)
TestFunctional/parallel/MySQL (26.82s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-102121 replace --force -f testdata/mysql.yaml
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-v8pnx" [7dfc7925-99c0-44a3-884d-e0133b1558c4] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-v8pnx" [7dfc7925-99c0-44a3-884d-e0133b1558c4] Running
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.01173758s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-102121 exec mysql-596b7fcdbf-v8pnx -- mysql -ppassword -e "show databases;"
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-102121 exec mysql-596b7fcdbf-v8pnx -- mysql -ppassword -e "show databases;": exit status 1 (156.928906ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-102121 exec mysql-596b7fcdbf-v8pnx -- mysql -ppassword -e "show databases;"
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-102121 exec mysql-596b7fcdbf-v8pnx -- mysql -ppassword -e "show databases;": exit status 1 (290.983073ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
E0114 10:24:29.023360   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
E0114 10:24:29.664439   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-102121 exec mysql-596b7fcdbf-v8pnx -- mysql -ppassword -e "show databases;"
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-102121 exec mysql-596b7fcdbf-v8pnx -- mysql -ppassword -e "show databases;": exit status 1 (219.787138ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
E0114 10:24:30.945174   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-102121 exec mysql-596b7fcdbf-v8pnx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.82s)
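Note: the failed `mysql -ppassword -e "show databases;"` attempts above are expected while the mysqld container finishes initializing; the test simply re-runs the command until it succeeds. A minimal sketch of that polling pattern (the `retry` helper below is hypothetical, not part of the minikube test suite):

```shell
#!/bin/sh
# Hypothetical retry helper: run a command until it exits 0 or the
# attempt budget is exhausted, sleeping 1s between attempts. This
# mirrors how the test keeps re-running "kubectl exec ... mysql"
# while the server is still starting up.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}
```

Usage against the pod above would look like: `retry 30 kubectl --context functional-102121 exec mysql-596b7fcdbf-v8pnx -- mysql -ppassword -e "show databases;"`.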
TestFunctional/parallel/FileSync (0.24s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/13921/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh "sudo cat /etc/test/nested/copy/13921/hosts"
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)
TestFunctional/parallel/CertSync (1.8s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/13921.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh "sudo cat /etc/ssl/certs/13921.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/13921.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh "sudo cat /usr/share/ca-certificates/13921.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/139212.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh "sudo cat /etc/ssl/certs/139212.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/139212.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh "sudo cat /usr/share/ca-certificates/139212.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.80s)
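Note: the `51391683.0` and `3ec20f2e.0` paths checked above are OpenSSL subject-hash names: the cert sync installs each certificate both under its own file name and under `<subject_hash>.0`, so hash-based lookups in `/etc/ssl/certs` resolve it. A sketch of how such a name is derived (assumes `openssl` is installed; `subject_hash_name` is a hypothetical helper, not part of the test suite):

```shell
#!/bin/sh
# Hypothetical helper: compute the c_rehash-style file name
# (<subject_hash>.0) under which a PEM certificate would be
# linked in /etc/ssl/certs.
subject_hash_name() {
  cert=$1
  hash=$(openssl x509 -in "$cert" -noout -subject_hash) || return 1
  echo "${hash}.0"
}
```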
TestFunctional/parallel/NodeLabels (0.08s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-102121 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh "sudo systemctl is-active docker"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-102121 ssh "sudo systemctl is-active docker": exit status 1 (273.994029ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:1954: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh "sudo systemctl is-active crio"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-102121 ssh "sudo systemctl is-active crio": exit status 1 (247.150055ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
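Note: `systemctl is-active` signals state through its exit status as well as its output: 0 for an active unit, non-zero otherwise (status 3 here, which ssh surfaces as "Process exited with status 3"). That is why the test treats a non-zero exit plus `inactive` on stdout as the expected result for docker and crio. A minimal sketch of checking a unit this way (`unit_is_inactive` is a hypothetical helper and assumes a systemd host):

```shell
#!/bin/sh
# Hypothetical helper: report whether a systemd unit is NOT active,
# relying on the exit status of "systemctl is-active" (0 = active,
# non-zero = inactive/failed/unknown) rather than parsing stdout.
unit_is_inactive() {
  unit=$1
  if systemctl is-active "$unit" >/dev/null 2>&1; then
    echo "$unit is active"
    return 1
  fi
  echo "$unit is not active"
  return 0
}
```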
TestFunctional/parallel/License (0.16s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)
TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 version --short
E0114 10:24:28.702632   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/Version/short (0.07s)
TestFunctional/parallel/Version/components (1.06s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 version -o=json --components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Done: out/minikube-linux-amd64 -p functional-102121 version -o=json --components: (1.059992912s)
--- PASS: TestFunctional/parallel/Version/components (1.06s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 image ls --format short
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-102121 image ls --format short:
registry.k8s.io/pause:3.8
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-102121
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-102121
docker.io/kindest/kindnetd:v20221004-44d545d1
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-102121 image ls --format table:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/mysql                     | 5.7                | sha256:d410f4 | 144MB  |
| gcr.io/google-containers/addon-resizer      | functional-102121  | sha256:ffd4cf | 10.8MB |
| registry.k8s.io/coredns/coredns             | v1.9.3             | sha256:5185b9 | 14.8MB |
| docker.io/kindest/kindnetd                  | v20221004-44d545d1 | sha256:d6e3e2 | 25.8MB |
| docker.io/library/minikube-local-cache-test | functional-102121  | sha256:2a6478 | 1.74kB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| k8s.gcr.io/echoserver                       | 1.8                | sha256:82e4c8 | 46.2MB |
| k8s.gcr.io/pause                            | 3.1                | sha256:da86e6 | 315kB  |
| k8s.gcr.io/pause                            | 3.3                | sha256:0184c1 | 298kB  |
| k8s.gcr.io/pause                            | latest             | sha256:350b16 | 72.3kB |
| registry.k8s.io/kube-proxy                  | v1.25.3            | sha256:beaaf0 | 20.3MB |
| docker.io/library/nginx                     | latest             | sha256:a99a39 | 56.9MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/kube-scheduler              | v1.25.3            | sha256:6d23ec | 15.8MB |
| registry.k8s.io/kube-apiserver              | v1.25.3            | sha256:0346db | 34.2MB |
| registry.k8s.io/kube-controller-manager     | v1.25.3            | sha256:603999 | 31.3MB |
| registry.k8s.io/pause                       | 3.8                | sha256:487387 | 311kB  |
| registry.k8s.io/etcd                        | 3.5.4-0            | sha256:a8a176 | 102MB  |
|---------------------------------------------|--------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.45s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-102121 image ls --format json:
[{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-102121"],"size":"10823156"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"315399"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"297686"},{"id":"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":["registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a"],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"14837849"},{"id":"sha256:6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912","repoDigests":["registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.25.3"],"size":"15798744"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041","repoDigests":["registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f"],"repoTags":["registry.k8s.io/kube-proxy:v1.25.3"],"size":"20265805"},{"id":"sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f","repoDigests":["docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe"],"repoTags":["docker.io/kindest/kindnetd:v20221004-44d545d1"],"size":"25830582"},{"id":"sha256:a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9","repoDigests":["docker.io/library/nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e"],"repoTags":["docker.io/library/nginx:latest"],"size":"56882371"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"72306"},{"id":"sha256:0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.25.3"],"size":"34238163"},{"id":"sha256:60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.25.3"],"size":"31261869"},{"id":"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517","repoDigests":["registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d"],"repoTags":["registry.k8s.io/pause:3.8"],"size":"311286"},{"id":"sha256:2a6478c5e826707283d5ecab77efc8f288e166a30dd29d1f0facb77aae76a8c9","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-102121"],"size":"1737"},{"id":"sha256:d410f4167eea912908b2f9bcc24eff870cb3c131dfb755088b79a4188bfeb40f","repoDigests":["docker.io/library/mysql@sha256:6306f106a056e24b3a2582a59a4c84cd199907f826eff27df36406f227cd9a7d"],"repoTags":["docker.io/library/mysql:5.7"],"size":"144290330"},{"id":"sha256:a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66","repoDigests":["registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1"],"repoTags":["registry.k8s.io/etcd:3.5.4-0"],"size":"102157811"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.45s)
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 image ls --format yaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-102121 image ls --format yaml:
- id: sha256:beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041
repoDigests:
- registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f
repoTags:
- registry.k8s.io/kube-proxy:v1.25.3
size: "20265805"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "46237695"
- id: sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "14837849"
- id: sha256:a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66
repoDigests:
- registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1
repoTags:
- registry.k8s.io/etcd:3.5.4-0
size: "102157811"
- id: sha256:0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3
repoTags:
- registry.k8s.io/kube-apiserver:v1.25.3
size: "34238163"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "72306"
- id: sha256:60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91
repoTags:
- registry.k8s.io/kube-controller-manager:v1.25.3
size: "31261869"
- id: sha256:6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e
repoTags:
- registry.k8s.io/kube-scheduler:v1.25.3
size: "15798744"
- id: sha256:2a6478c5e826707283d5ecab77efc8f288e166a30dd29d1f0facb77aae76a8c9
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-102121
size: "1737"
- id: sha256:a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9
repoDigests:
- docker.io/library/nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e
repoTags:
- docker.io/library/nginx:latest
size: "56882371"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-102121
size: "10823156"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "297686"
- id: sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
repoDigests:
- registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d
repoTags:
- registry.k8s.io/pause:3.8
size: "311286"
- id: sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f
repoDigests:
- docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe
repoTags:
- docker.io/kindest/kindnetd:v20221004-44d545d1
size: "25830582"
- id: sha256:d410f4167eea912908b2f9bcc24eff870cb3c131dfb755088b79a4188bfeb40f
repoDigests:
- docker.io/library/mysql@sha256:6306f106a056e24b3a2582a59a4c84cd199907f826eff27df36406f227cd9a7d
repoTags:
- docker.io/library/mysql:5.7
size: "144290330"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "315399"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
TestFunctional/parallel/ImageCommands/ImageBuild (4.43s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh pgrep buildkitd
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-102121 ssh pgrep buildkitd: exit status 1 (245.958671ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 image build -t localhost/my-image:functional-102121 testdata/build
E0114 10:24:33.505354   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p functional-102121 image build -t localhost/my-image:functional-102121 testdata/build: (3.956847836s)
functional_test.go:319: (dbg) Stderr: out/minikube-linux-amd64 -p functional-102121 image build -t localhost/my-image:functional-102121 testdata/build:
#1 [internal] load .dockerignore
#1 transferring context:
#1 transferring context: 2B done
#1 DONE 0.1s
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.1s
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.2s
#4 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#4 DONE 0.1s
#5 [internal] load build context
#5 transferring context: 62B done
#5 DONE 0.1s
#4 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#4 DONE 0.4s
#6 [2/3] RUN true
#6 DONE 1.0s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:bf6f526f1fd5d0b4a81f3257e96b3d5f0ade57418a50a9b15163ab4ad9dc47bc
#8 exporting manifest sha256:bf6f526f1fd5d0b4a81f3257e96b3d5f0ade57418a50a9b15163ab4ad9dc47bc 0.0s done
#8 exporting config sha256:9a3d15892c5e7b2b1a34956fd8b997f2d143f36d0a1c5366dfe29a38e1b988f8 0.0s done
#8 naming to localhost/my-image:functional-102121 done
#8 DONE 0.2s
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 image ls
E0114 10:24:38.625889   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.43s)
TestFunctional/parallel/ImageCommands/Setup (1.33s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.303297843s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-102121
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.33s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)
TestFunctional/parallel/ProfileCmd/profile_list (0.34s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1311: Took "264.549628ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: Took "75.417614ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1362: Took "310.408598ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "85.665166ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.15s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 image load --daemon gcr.io/google-containers/addon-resizer:functional-102121
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p functional-102121 image load --daemon gcr.io/google-containers/addon-resizer:functional-102121: (3.85614445s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.15s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.11s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 image load --daemon gcr.io/google-containers/addon-resizer:functional-102121
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-linux-amd64 -p functional-102121 image load --daemon gcr.io/google-containers/addon-resizer:functional-102121: (3.863573973s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.11s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.222586868s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-102121
functional_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 image load --daemon gcr.io/google-containers/addon-resizer:functional-102121
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p functional-102121 image load --daemon gcr.io/google-containers/addon-resizer:functional-102121: (4.687326492s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.26s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)
TestFunctional/parallel/MountCmd/any-port (20.83s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-102121 /tmp/TestFunctionalparallelMountCmdany-port1958101934/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1673691845825008342" to /tmp/TestFunctionalparallelMountCmdany-port1958101934/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1673691845825008342" to /tmp/TestFunctionalparallelMountCmdany-port1958101934/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1673691845825008342" to /tmp/TestFunctionalparallelMountCmdany-port1958101934/001/test-1673691845825008342
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-102121 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (294.700865ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 14 10:24 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 14 10:24 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 14 10:24 test-1673691845825008342
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh cat /mount-9p/test-1673691845825008342
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-102121 replace --force -f testdata/busybox-mount-test.yaml
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [71d9aec1-11e0-4965-9f71-94455ae33cad] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [71d9aec1-11e0-4965-9f71-94455ae33cad] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [71d9aec1-11e0-4965-9f71-94455ae33cad] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:342: "busybox-mount" [71d9aec1-11e0-4965-9f71-94455ae33cad] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 18.013008249s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-102121 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-102121 /tmp/TestFunctionalparallelMountCmdany-port1958101934/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (20.83s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 image save gcr.io/google-containers/addon-resizer:functional-102121 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Done: out/minikube-linux-amd64 -p functional-102121 image save gcr.io/google-containers/addon-resizer:functional-102121 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar: (1.391184428s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.39s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 image rm gcr.io/google-containers/addon-resizer:functional-102121
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.83s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar
functional_test.go:405: (dbg) Done: out/minikube-linux-amd64 -p functional-102121 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar: (1.542840871s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.83s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.63s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-102121
functional_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 image save --daemon gcr.io/google-containers/addon-resizer:functional-102121
functional_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p functional-102121 image save --daemon gcr.io/google-containers/addon-resizer:functional-102121: (1.574846727s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-102121
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.63s)
TestFunctional/parallel/MountCmd/specific-port (2.01s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-102121 /tmp/TestFunctionalparallelMountCmdspecific-port3791099418/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-102121 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (240.69121ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-102121 /tmp/TestFunctionalparallelMountCmdspecific-port3791099418/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p functional-102121 ssh "sudo umount -f /mount-9p"
E0114 10:24:28.385140   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
E0114 10:24:28.390801   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
E0114 10:24:28.401059   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
E0114 10:24:28.421308   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
E0114 10:24:28.461609   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
E0114 10:24:28.541986   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-102121 ssh "sudo umount -f /mount-9p": exit status 1 (244.298491ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:225: "out/minikube-linux-amd64 -p functional-102121 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-102121 /tmp/TestFunctionalparallelMountCmdspecific-port3791099418/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.01s)
TestFunctional/delete_addon-resizer_images (0.08s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-102121
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-102121
--- PASS: TestFunctional/delete_my-image_image (0.02s)
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-102121
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
TestIngressAddonLegacy/StartLegacyK8sCluster (99.34s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-102444 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0114 10:24:48.866797   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
E0114 10:25:09.347293   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
E0114 10:25:50.307766   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-102444 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m39.344412185s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (99.34s)
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.85s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-102444 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-102444 addons enable ingress --alsologtostderr -v=5: (11.845375655s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.85s)
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.39s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-102444 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.39s)


addons_test.go:169: (dbg) Run:  kubectl --context ingress-addon-legacy-102444 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:169: (dbg) Done: kubectl --context ingress-addon-legacy-102444 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.082415651s)
addons_test.go:189: (dbg) Run:  kubectl --context ingress-addon-legacy-102444 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:202: (dbg) Run:  kubectl --context ingress-addon-legacy-102444 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:207: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [760d3ba7-db1a-491f-b6dc-e8774dac56f5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [760d3ba7-db1a-491f-b6dc-e8774dac56f5] Running
addons_test.go:207: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.017976025s
addons_test.go:219: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-102444 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:243: (dbg) Run:  kubectl --context ingress-addon-legacy-102444 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-102444 ip
addons_test.go:254: (dbg) Run:  nslookup hello-john.test 192.168.39.8
addons_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-102444 addons disable ingress-dns --alsologtostderr -v=1
E0114 10:27:12.228601   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
addons_test.go:263: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-102444 addons disable ingress-dns --alsologtostderr -v=1: (10.851447128s)
addons_test.go:268: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-102444 addons disable ingress --alsologtostderr -v=1
addons_test.go:268: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-102444 addons disable ingress --alsologtostderr -v=1: (7.376218638s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (45.56s)

                                                
                                    
TestJSONOutput/start/Command (79.42s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-102722 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-102722 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m19.415583662s)
--- PASS: TestJSONOutput/start/Command (79.42s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.62s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-102722 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.62s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.59s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-102722 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (2.11s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-102722 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-102722 --output=json --user=testUser: (2.109632333s)
--- PASS: TestJSONOutput/stop/Command (2.11s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.26s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-102845 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-102845 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (87.210733ms)
-- stdout --
	{"specversion":"1.0","id":"0572544b-6f3b-4e84-9ce7-0ee25ef69eb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-102845] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"99bf63eb-8e55-4b02-bbde-c2208e540f0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15642"}}
	{"specversion":"1.0","id":"4e4389a6-86cc-4f5d-a775-95abb2722cd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"54a0ab46-8fc4-42b6-8d58-e10ee37094a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15642-7076/kubeconfig"}}
	{"specversion":"1.0","id":"b38ae251-7638-48c8-bcd9-1e0454106e1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-7076/.minikube"}}
	{"specversion":"1.0","id":"680b0a6e-db52-403b-ab45-e73c57a9ba05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"55f08c4e-f10d-4185-ae0b-b5326146e3cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-102845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-102845
--- PASS: TestErrorJSONOutput (0.26s)
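The `-- stdout --` block above is newline-delimited CloudEvents, the schema minikube uses for `--output=json`. A minimal parsing sketch (not part of the test suite; the helper name is mine, and the sample line is an abbreviated copy of the error event above):

```python
import json

def parse_minikube_events(lines):
    """Parse newline-delimited CloudEvents from `minikube --output=json`.

    Yields (kind, message) pairs, where kind is the final component of the
    CloudEvents `type` field (e.g. "step", "info", "error").
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("{"):
            continue  # skip interleaved plain-text log lines
        event = json.loads(line)
        kind = event.get("type", "").rsplit(".", 1)[-1]
        message = event.get("data", {}).get("message", "")
        yield kind, message

sample = ('{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
          '"data":{"exitcode":"56","message":"The driver \'fail\' is not '
          'supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}')
print(list(parse_minikube_events([sample])))
```

Splitting on the last `.` of the `type` field is what lets a consumer branch on `step`/`info`/`error` without hard-coding the full `io.k8s.sigs.minikube.*` prefix.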

                                                
                                    
TestMainNoArgs (0.07s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (112.49s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-102846 --driver=kvm2  --container-runtime=containerd
E0114 10:28:52.032841   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
E0114 10:28:52.038121   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
E0114 10:28:52.048386   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
E0114 10:28:52.068690   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
E0114 10:28:52.109002   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
E0114 10:28:52.189323   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
E0114 10:28:52.349741   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
E0114 10:28:52.670325   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
E0114 10:28:53.311316   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
E0114 10:28:54.591638   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
E0114 10:28:57.152819   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
E0114 10:29:02.273913   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
E0114 10:29:12.514706   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
E0114 10:29:28.385554   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
E0114 10:29:32.994957   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-102846 --driver=kvm2  --container-runtime=containerd: (55.284077122s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-102846 --driver=kvm2  --container-runtime=containerd
E0114 10:29:56.069667   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
E0114 10:30:13.956018   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-102846 --driver=kvm2  --container-runtime=containerd: (54.151568403s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-102846
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-102846
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-102846" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-102846
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-102846: (1.031879706s)
helpers_test.go:175: Cleaning up "first-102846" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-102846
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-102846: (1.023644324s)
--- PASS: TestMinikubeProfile (112.49s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.11s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-103038 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-103038 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (26.107500323s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.11s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.42s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-103038 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-103038 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.42s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.41s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-103038 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-103038 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (26.41166511s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.41s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.41s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-103038 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-103038 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.68s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-103038 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.41s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-103038 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-103038 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
TestMountStart/serial/Stop (1.15s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-103038
E0114 10:31:35.877162   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
E0114 10:31:36.136660   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
E0114 10:31:36.141914   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
E0114 10:31:36.152158   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
E0114 10:31:36.172425   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
E0114 10:31:36.212682   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
E0114 10:31:36.292992   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-103038: (1.148841958s)
--- PASS: TestMountStart/serial/Stop (1.15s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.2s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-103038
E0114 10:31:36.454024   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
E0114 10:31:36.774648   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
E0114 10:31:37.415037   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
E0114 10:31:38.696157   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
E0114 10:31:41.257988   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
E0114 10:31:46.378756   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
E0114 10:31:56.619166   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-103038: (21.199800894s)
--- PASS: TestMountStart/serial/RestartStopped (22.20s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.42s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-103038 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-103038 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (183.8s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-103159 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0114 10:32:17.099306   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
E0114 10:32:58.060187   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
E0114 10:33:52.030282   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
E0114 10:34:19.717898   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
E0114 10:34:19.981272   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
E0114 10:34:28.384884   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-103159 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m3.340127638s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (183.80s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103159 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103159 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-103159 -- rollout status deployment/busybox: (3.202656883s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103159 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103159 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103159 -- exec busybox-65db55d5d6-h98qg -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103159 -- exec busybox-65db55d5d6-wgpn6 -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103159 -- exec busybox-65db55d5d6-h98qg -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103159 -- exec busybox-65db55d5d6-wgpn6 -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103159 -- exec busybox-65db55d5d6-h98qg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103159 -- exec busybox-65db55d5d6-wgpn6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.00s)
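The jsonpath queries above return the busybox pod IPs and names as a single space-separated string. One plausible use of the podIP output, sketched here (helper name is mine, not taken from the test), is checking that each replica received a distinct IP after being scheduled across the two nodes:

```python
def pods_have_distinct_ips(jsonpath_output):
    """Given the space-separated output of
    `kubectl get pods -o jsonpath='{.items[*].status.podIP}'`,
    report whether every pod received a unique IP."""
    ips = jsonpath_output.split()
    return len(ips) == len(set(ips))

print(pods_have_distinct_ips("10.244.0.3 10.244.1.2"))   # distinct
print(pods_have_distinct_ips("10.244.0.3 10.244.0.3"))   # duplicate
```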

TestMultiNode/serial/PingHostFrom2Pods (0.92s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103159 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103159 -- exec busybox-65db55d5d6-h98qg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103159 -- exec busybox-65db55d5d6-h98qg -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103159 -- exec busybox-65db55d5d6-wgpn6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-103159 -- exec busybox-65db55d5d6-wgpn6 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)
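The `nslookup ... | awk 'NR==5' | cut -d' ' -f3` pipeline above extracts the host IP from busybox-style nslookup output. A minimal local sketch of that extraction, using hypothetical sample output (real values come from the cluster's DNS):

```shell
# Hypothetical busybox-style `nslookup host.minikube.internal` output.
sample='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal'

# Line 5 is the answer record; field 3 (single-space delimited) is the IP.
host_ip=$(printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3)
echo "$host_ip"   # 192.168.39.1
```

The extracted address is what the test then pings from inside each pod (`ping -c 1 192.168.39.1`).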

TestMultiNode/serial/AddNode (60.34s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-103159 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-103159 -v 3 --alsologtostderr: (59.722519514s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (60.34s)

TestMultiNode/serial/ProfileList (0.25s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.25s)

TestMultiNode/serial/CopyFile (8.15s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 cp testdata/cp-test.txt multinode-103159:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 ssh -n multinode-103159 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 cp multinode-103159:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2182500412/001/cp-test_multinode-103159.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 ssh -n multinode-103159 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 cp multinode-103159:/home/docker/cp-test.txt multinode-103159-m02:/home/docker/cp-test_multinode-103159_multinode-103159-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 ssh -n multinode-103159 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 ssh -n multinode-103159-m02 "sudo cat /home/docker/cp-test_multinode-103159_multinode-103159-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 cp multinode-103159:/home/docker/cp-test.txt multinode-103159-m03:/home/docker/cp-test_multinode-103159_multinode-103159-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 ssh -n multinode-103159 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 ssh -n multinode-103159-m03 "sudo cat /home/docker/cp-test_multinode-103159_multinode-103159-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 cp testdata/cp-test.txt multinode-103159-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 ssh -n multinode-103159-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 cp multinode-103159-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2182500412/001/cp-test_multinode-103159-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 ssh -n multinode-103159-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 cp multinode-103159-m02:/home/docker/cp-test.txt multinode-103159:/home/docker/cp-test_multinode-103159-m02_multinode-103159.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 ssh -n multinode-103159-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 ssh -n multinode-103159 "sudo cat /home/docker/cp-test_multinode-103159-m02_multinode-103159.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 cp multinode-103159-m02:/home/docker/cp-test.txt multinode-103159-m03:/home/docker/cp-test_multinode-103159-m02_multinode-103159-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 ssh -n multinode-103159-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 ssh -n multinode-103159-m03 "sudo cat /home/docker/cp-test_multinode-103159-m02_multinode-103159-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 cp testdata/cp-test.txt multinode-103159-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 ssh -n multinode-103159-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 cp multinode-103159-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2182500412/001/cp-test_multinode-103159-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 ssh -n multinode-103159-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 cp multinode-103159-m03:/home/docker/cp-test.txt multinode-103159:/home/docker/cp-test_multinode-103159-m03_multinode-103159.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 ssh -n multinode-103159-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 ssh -n multinode-103159 "sudo cat /home/docker/cp-test_multinode-103159-m03_multinode-103159.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 cp multinode-103159-m03:/home/docker/cp-test.txt multinode-103159-m02:/home/docker/cp-test_multinode-103159-m03_multinode-103159-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 ssh -n multinode-103159-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 ssh -n multinode-103159-m02 "sudo cat /home/docker/cp-test_multinode-103159-m03_multinode-103159-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.15s)
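Every `helpers_test.go` step above follows the same copy-then-verify pattern: `minikube cp` a file to a node (or between nodes), then `minikube ssh ... sudo cat` it back to confirm the contents. A local stand-in for that pattern, with plain `cp`/`cat` and `mktemp` paths replacing the minikube commands and node paths:

```shell
# Local sketch of the copy-then-verify pattern; the temp paths are
# stand-ins for the node paths used by `minikube cp` / `minikube ssh`.
src=$(mktemp)
dstdir=$(mktemp -d)
echo 'cp-test payload' > "$src"

cp "$src" "$dstdir/cp-test.txt"        # ~ minikube -p <profile> cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
readback=$(cat "$dstdir/cp-test.txt")  # ~ minikube -p <profile> ssh -n <node> "sudo cat /home/docker/cp-test.txt"
[ "$readback" = 'cp-test payload' ] && echo "contents match"
```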

TestMultiNode/serial/StopNode (2.23s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-103159 node stop m03: (1.32025469s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-103159 status: exit status 7 (462.518969ms)

-- stdout --
	multinode-103159
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-103159-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-103159-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-103159 status --alsologtostderr: exit status 7 (450.565898ms)

-- stdout --
	multinode-103159
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-103159-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-103159-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0114 10:36:20.161481   25093 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:36:20.161575   25093 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:36:20.161579   25093 out.go:309] Setting ErrFile to fd 2...
	I0114 10:36:20.161584   25093 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:36:20.161686   25093 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-7076/.minikube/bin
	I0114 10:36:20.161832   25093 out.go:303] Setting JSON to false
	I0114 10:36:20.161857   25093 mustload.go:65] Loading cluster: multinode-103159
	I0114 10:36:20.161955   25093 notify.go:220] Checking for updates...
	I0114 10:36:20.162195   25093 config.go:180] Loaded profile config "multinode-103159": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:36:20.162211   25093 status.go:255] checking status of multinode-103159 ...
	I0114 10:36:20.162525   25093 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0114 10:36:20.162582   25093 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:36:20.177909   25093 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:33281
	I0114 10:36:20.178323   25093 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:36:20.178885   25093 main.go:134] libmachine: Using API Version  1
	I0114 10:36:20.178909   25093 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:36:20.179232   25093 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:36:20.179412   25093 main.go:134] libmachine: (multinode-103159) Calling .GetState
	I0114 10:36:20.180868   25093 status.go:330] multinode-103159 host status = "Running" (err=<nil>)
	I0114 10:36:20.180894   25093 host.go:66] Checking if "multinode-103159" exists ...
	I0114 10:36:20.181186   25093 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0114 10:36:20.181222   25093 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:36:20.195987   25093 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:42023
	I0114 10:36:20.196322   25093 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:36:20.196705   25093 main.go:134] libmachine: Using API Version  1
	I0114 10:36:20.196728   25093 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:36:20.197011   25093 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:36:20.197195   25093 main.go:134] libmachine: (multinode-103159) Calling .GetIP
	I0114 10:36:20.199829   25093 main.go:134] libmachine: (multinode-103159) DBG | domain multinode-103159 has defined MAC address 52:54:00:78:57:98 in network mk-multinode-103159
	I0114 10:36:20.200245   25093 main.go:134] libmachine: (multinode-103159) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:57:98", ip: ""} in network mk-multinode-103159: {Iface:virbr1 ExpiryTime:2023-01-14 11:32:14 +0000 UTC Type:0 Mac:52:54:00:78:57:98 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-103159 Clientid:01:52:54:00:78:57:98}
	I0114 10:36:20.200275   25093 main.go:134] libmachine: (multinode-103159) DBG | domain multinode-103159 has defined IP address 192.168.39.217 and MAC address 52:54:00:78:57:98 in network mk-multinode-103159
	I0114 10:36:20.200381   25093 host.go:66] Checking if "multinode-103159" exists ...
	I0114 10:36:20.200637   25093 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0114 10:36:20.200660   25093 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:36:20.214807   25093 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:40463
	I0114 10:36:20.215164   25093 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:36:20.215581   25093 main.go:134] libmachine: Using API Version  1
	I0114 10:36:20.215595   25093 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:36:20.215917   25093 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:36:20.216096   25093 main.go:134] libmachine: (multinode-103159) Calling .DriverName
	I0114 10:36:20.216303   25093 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 10:36:20.216338   25093 main.go:134] libmachine: (multinode-103159) Calling .GetSSHHostname
	I0114 10:36:20.218480   25093 main.go:134] libmachine: (multinode-103159) DBG | domain multinode-103159 has defined MAC address 52:54:00:78:57:98 in network mk-multinode-103159
	I0114 10:36:20.218821   25093 main.go:134] libmachine: (multinode-103159) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:57:98", ip: ""} in network mk-multinode-103159: {Iface:virbr1 ExpiryTime:2023-01-14 11:32:14 +0000 UTC Type:0 Mac:52:54:00:78:57:98 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:multinode-103159 Clientid:01:52:54:00:78:57:98}
	I0114 10:36:20.218851   25093 main.go:134] libmachine: (multinode-103159) DBG | domain multinode-103159 has defined IP address 192.168.39.217 and MAC address 52:54:00:78:57:98 in network mk-multinode-103159
	I0114 10:36:20.218937   25093 main.go:134] libmachine: (multinode-103159) Calling .GetSSHPort
	I0114 10:36:20.219112   25093 main.go:134] libmachine: (multinode-103159) Calling .GetSSHKeyPath
	I0114 10:36:20.219266   25093 main.go:134] libmachine: (multinode-103159) Calling .GetSSHUsername
	I0114 10:36:20.219373   25093 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/machines/multinode-103159/id_rsa Username:docker}
	I0114 10:36:20.309948   25093 ssh_runner.go:195] Run: systemctl --version
	I0114 10:36:20.315898   25093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:36:20.329383   25093 kubeconfig.go:92] found "multinode-103159" server: "https://192.168.39.217:8443"
	I0114 10:36:20.329416   25093 api_server.go:165] Checking apiserver status ...
	I0114 10:36:20.329450   25093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0114 10:36:20.342836   25093 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup
	I0114 10:36:20.351961   25093 api_server.go:181] apiserver freezer: "7:freezer:/kubepods/burstable/podc5623f7f2e78dbaf6699a36ad29bbeee/83ccd1542a4f4b153c52d3fa08d3fccd99bc4842518acf484a4723bdd3cab8bb"
	I0114 10:36:20.352020   25093 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podc5623f7f2e78dbaf6699a36ad29bbeee/83ccd1542a4f4b153c52d3fa08d3fccd99bc4842518acf484a4723bdd3cab8bb/freezer.state
	I0114 10:36:20.360661   25093 api_server.go:203] freezer state: "THAWED"
	I0114 10:36:20.360688   25093 api_server.go:252] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0114 10:36:20.366286   25093 api_server.go:278] https://192.168.39.217:8443/healthz returned 200:
	ok
	I0114 10:36:20.366304   25093 status.go:421] multinode-103159 apiserver status = Running (err=<nil>)
	I0114 10:36:20.366312   25093 status.go:257] multinode-103159 status: &{Name:multinode-103159 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0114 10:36:20.366325   25093 status.go:255] checking status of multinode-103159-m02 ...
	I0114 10:36:20.366585   25093 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0114 10:36:20.366609   25093 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:36:20.381315   25093 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:45301
	I0114 10:36:20.381784   25093 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:36:20.382233   25093 main.go:134] libmachine: Using API Version  1
	I0114 10:36:20.382253   25093 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:36:20.382532   25093 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:36:20.382771   25093 main.go:134] libmachine: (multinode-103159-m02) Calling .GetState
	I0114 10:36:20.384300   25093 status.go:330] multinode-103159-m02 host status = "Running" (err=<nil>)
	I0114 10:36:20.384321   25093 host.go:66] Checking if "multinode-103159-m02" exists ...
	I0114 10:36:20.384614   25093 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0114 10:36:20.384637   25093 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:36:20.399282   25093 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:40337
	I0114 10:36:20.399699   25093 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:36:20.400149   25093 main.go:134] libmachine: Using API Version  1
	I0114 10:36:20.400174   25093 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:36:20.400507   25093 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:36:20.400691   25093 main.go:134] libmachine: (multinode-103159-m02) Calling .GetIP
	I0114 10:36:20.403851   25093 main.go:134] libmachine: (multinode-103159-m02) DBG | domain multinode-103159-m02 has defined MAC address 52:54:00:52:64:50 in network mk-multinode-103159
	I0114 10:36:20.404251   25093 main.go:134] libmachine: (multinode-103159-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:64:50", ip: ""} in network mk-multinode-103159: {Iface:virbr1 ExpiryTime:2023-01-14 11:33:31 +0000 UTC Type:0 Mac:52:54:00:52:64:50 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-103159-m02 Clientid:01:52:54:00:52:64:50}
	I0114 10:36:20.404282   25093 main.go:134] libmachine: (multinode-103159-m02) DBG | domain multinode-103159-m02 has defined IP address 192.168.39.206 and MAC address 52:54:00:52:64:50 in network mk-multinode-103159
	I0114 10:36:20.404468   25093 host.go:66] Checking if "multinode-103159-m02" exists ...
	I0114 10:36:20.404727   25093 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0114 10:36:20.404759   25093 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:36:20.419686   25093 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:42953
	I0114 10:36:20.420022   25093 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:36:20.420491   25093 main.go:134] libmachine: Using API Version  1
	I0114 10:36:20.420517   25093 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:36:20.420806   25093 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:36:20.420949   25093 main.go:134] libmachine: (multinode-103159-m02) Calling .DriverName
	I0114 10:36:20.421100   25093 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0114 10:36:20.421119   25093 main.go:134] libmachine: (multinode-103159-m02) Calling .GetSSHHostname
	I0114 10:36:20.423423   25093 main.go:134] libmachine: (multinode-103159-m02) DBG | domain multinode-103159-m02 has defined MAC address 52:54:00:52:64:50 in network mk-multinode-103159
	I0114 10:36:20.423771   25093 main.go:134] libmachine: (multinode-103159-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:52:64:50", ip: ""} in network mk-multinode-103159: {Iface:virbr1 ExpiryTime:2023-01-14 11:33:31 +0000 UTC Type:0 Mac:52:54:00:52:64:50 Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:multinode-103159-m02 Clientid:01:52:54:00:52:64:50}
	I0114 10:36:20.423802   25093 main.go:134] libmachine: (multinode-103159-m02) DBG | domain multinode-103159-m02 has defined IP address 192.168.39.206 and MAC address 52:54:00:52:64:50 in network mk-multinode-103159
	I0114 10:36:20.423923   25093 main.go:134] libmachine: (multinode-103159-m02) Calling .GetSSHPort
	I0114 10:36:20.424089   25093 main.go:134] libmachine: (multinode-103159-m02) Calling .GetSSHKeyPath
	I0114 10:36:20.424235   25093 main.go:134] libmachine: (multinode-103159-m02) Calling .GetSSHUsername
	I0114 10:36:20.424442   25093 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15642-7076/.minikube/machines/multinode-103159-m02/id_rsa Username:docker}
	I0114 10:36:20.517377   25093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0114 10:36:20.529959   25093 status.go:257] multinode-103159-m02 status: &{Name:multinode-103159-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0114 10:36:20.530005   25093 status.go:255] checking status of multinode-103159-m03 ...
	I0114 10:36:20.530396   25093 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0114 10:36:20.530431   25093 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:36:20.545255   25093 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:42471
	I0114 10:36:20.545736   25093 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:36:20.546295   25093 main.go:134] libmachine: Using API Version  1
	I0114 10:36:20.546321   25093 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:36:20.546631   25093 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:36:20.546815   25093 main.go:134] libmachine: (multinode-103159-m03) Calling .GetState
	I0114 10:36:20.548263   25093 status.go:330] multinode-103159-m03 host status = "Stopped" (err=<nil>)
	I0114 10:36:20.548272   25093 status.go:343] host is not running, skipping remaining checks
	I0114 10:36:20.548278   25093 status.go:257] multinode-103159-m03 status: &{Name:multinode-103159-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)
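In the stderr trace above, the apiserver status check greps the `freezer` entry out of `/proc/<pid>/cgroup` and then reads `freezer.state` under `/sys/fs/cgroup` ("THAWED" meaning the process is not frozen). A sketch of that path mapping on a hypothetical cgroup v1 line (the pod and container IDs are made up):

```shell
# Hypothetical cgroup v1 entry for the kube-apiserver process,
# in the format hierarchy-ID:controller:path.
line='7:freezer:/kubepods/burstable/pod1234/abcd'

# Everything after the second colon is the cgroup path; the check reads
# /sys/fs/cgroup/freezer${path}/freezer.state for that cgroup.
path=$(printf '%s\n' "$line" | cut -d: -f3-)
echo "/sys/fs/cgroup/freezer${path}/freezer.state"
```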

TestMultiNode/serial/StartAfterStop (61.17s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 node start m03 --alsologtostderr
E0114 10:36:36.136499   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
E0114 10:37:03.821409   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-103159 node start m03 --alsologtostderr: (1m0.497106492s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (61.17s)

TestMultiNode/serial/RestartKeepsNodes (530.62s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-103159
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-103159
E0114 10:38:52.032140   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
E0114 10:39:28.386006   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-103159: (3m4.541173793s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-103159 --wait=true -v=8 --alsologtostderr
E0114 10:40:51.430202   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
E0114 10:41:36.135990   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
E0114 10:43:52.030203   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
E0114 10:44:28.385270   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
E0114 10:45:15.078931   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-103159 --wait=true -v=8 --alsologtostderr: (5m45.948884896s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-103159
--- PASS: TestMultiNode/serial/RestartKeepsNodes (530.62s)

TestMultiNode/serial/DeleteNode (2.08s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-103159 node delete m03: (1.52480093s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 status --alsologtostderr
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.08s)

TestMultiNode/serial/StopMultiNode (183.53s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 stop
E0114 10:46:36.136347   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
E0114 10:47:59.182471   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
E0114 10:48:52.032676   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-103159 stop: (3m3.324615166s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-103159 status: exit status 7 (101.053152ms)

-- stdout --
	multinode-103159
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-103159-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-103159 status --alsologtostderr: exit status 7 (104.086664ms)

-- stdout --
	multinode-103159
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-103159-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0114 10:49:17.916668   26306 out.go:296] Setting OutFile to fd 1 ...
	I0114 10:49:17.917042   26306 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:49:17.917053   26306 out.go:309] Setting ErrFile to fd 2...
	I0114 10:49:17.917061   26306 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 10:49:17.917324   26306 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-7076/.minikube/bin
	I0114 10:49:17.917627   26306 out.go:303] Setting JSON to false
	I0114 10:49:17.917675   26306 mustload.go:65] Loading cluster: multinode-103159
	I0114 10:49:17.918038   26306 notify.go:220] Checking for updates...
	I0114 10:49:17.918760   26306 config.go:180] Loaded profile config "multinode-103159": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 10:49:17.918785   26306 status.go:255] checking status of multinode-103159 ...
	I0114 10:49:17.919213   26306 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0114 10:49:17.919259   26306 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:49:17.933879   26306 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:34045
	I0114 10:49:17.934247   26306 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:49:17.934851   26306 main.go:134] libmachine: Using API Version  1
	I0114 10:49:17.934878   26306 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:49:17.935217   26306 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:49:17.935398   26306 main.go:134] libmachine: (multinode-103159) Calling .GetState
	I0114 10:49:17.937038   26306 status.go:330] multinode-103159 host status = "Stopped" (err=<nil>)
	I0114 10:49:17.937053   26306 status.go:343] host is not running, skipping remaining checks
	I0114 10:49:17.937058   26306 status.go:257] multinode-103159 status: &{Name:multinode-103159 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0114 10:49:17.937071   26306 status.go:255] checking status of multinode-103159-m02 ...
	I0114 10:49:17.937367   26306 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0114 10:49:17.937396   26306 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0114 10:49:17.951677   26306 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:42225
	I0114 10:49:17.952076   26306 main.go:134] libmachine: () Calling .GetVersion
	I0114 10:49:17.952496   26306 main.go:134] libmachine: Using API Version  1
	I0114 10:49:17.952516   26306 main.go:134] libmachine: () Calling .SetConfigRaw
	I0114 10:49:17.952797   26306 main.go:134] libmachine: () Calling .GetMachineName
	I0114 10:49:17.952970   26306 main.go:134] libmachine: (multinode-103159-m02) Calling .GetState
	I0114 10:49:17.954492   26306 status.go:330] multinode-103159-m02 host status = "Stopped" (err=<nil>)
	I0114 10:49:17.954509   26306 status.go:343] host is not running, skipping remaining checks
	I0114 10:49:17.954516   26306 status.go:257] multinode-103159-m02 status: &{Name:multinode-103159-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.53s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (267.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-103159 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0114 10:49:28.385549   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
E0114 10:51:36.136586   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-103159 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (4m26.979720365s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-103159 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (267.54s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (55.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-103159
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-103159-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-103159-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (91.126148ms)

                                                
                                                
-- stdout --
	* [multinode-103159-m02] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-7076/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-7076/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-103159-m02' is duplicated with machine name 'multinode-103159-m02' in profile 'multinode-103159'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-103159-m03 --driver=kvm2  --container-runtime=containerd
E0114 10:53:52.032166   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
E0114 10:54:28.385764   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-103159-m03 --driver=kvm2  --container-runtime=containerd: (54.507095891s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-103159
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-103159: exit status 80 (248.478776ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-103159
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-103159-m03 already exists in multinode-103159-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-103159-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-103159-m03: (1.027356477s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (55.94s)

                                                
                                    
TestScheduledStopUnix (125.18s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-105756 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-105756 --memory=2048 --driver=kvm2  --container-runtime=containerd: (53.25544962s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-105756 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-105756 -n scheduled-stop-105756
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-105756 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-105756 --cancel-scheduled
E0114 10:58:52.031343   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-105756 -n scheduled-stop-105756
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-105756
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-105756 --schedule 15s
E0114 10:59:28.386108   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-105756
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-105756: exit status 7 (90.079703ms)

                                                
                                                
-- stdout --
	scheduled-stop-105756
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-105756 -n scheduled-stop-105756
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-105756 -n scheduled-stop-105756: exit status 7 (87.313361ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-105756" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-105756
--- PASS: TestScheduledStopUnix (125.18s)

                                                
                                    
TestKubernetesUpgrade (227.55s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-110001 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-110001 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m37.89164247s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-110001
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-110001: (2.121085072s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-110001 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-110001 status --format={{.Host}}: exit status 7 (100.617524ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-110001 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-110001 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m43.065021249s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-110001 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-110001 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-110001 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (136.670722ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-110001] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-7076/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-7076/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.25.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-110001
	    minikube start -p kubernetes-upgrade-110001 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1100012 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.25.3, by running:
	    
	    minikube start -p kubernetes-upgrade-110001 --kubernetes-version=v1.25.3
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-110001 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-110001 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (22.741287651s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-110001" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-110001
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-110001: (1.397327172s)
--- PASS: TestKubernetesUpgrade (227.55s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-110001 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-110001 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (113.772032ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-110001] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-7076/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-7076/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (106.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-110001 --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-110001 --driver=kvm2  --container-runtime=containerd: (1m45.767723342s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-110001 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (106.07s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (25.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-110001 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0114 11:01:55.079960   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-110001 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (23.912238284s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-110001 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-110001 status -o json: exit status 2 (305.4686ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-110001","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-110001
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-110001: (1.281652002s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (25.50s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.79s)

                                                
                                    
TestNoKubernetes/serial/Start (27.5s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-110001 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-110001 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.499659508s)
--- PASS: TestNoKubernetes/serial/Start (27.50s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-110001 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-110001 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.041284ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (49.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (17.938584583s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json

                                                
                                                
=== CONT  TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (31.873895278s)
--- PASS: TestNoKubernetes/serial/ProfileList (49.81s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-110001
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-110001: (1.977289556s)
--- PASS: TestNoKubernetes/serial/Stop (1.98s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (26.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-110001 --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-110001 --driver=kvm2  --container-runtime=containerd: (26.726529991s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (26.73s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-110001 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-110001 "sudo systemctl is-active --quiet service kubelet": exit status 1 (293.864745ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

                                                
                                    
TestNetworkPlugins/group/false (0.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:220: (dbg) Run:  out/minikube-linux-amd64 start -p false-110401 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:220: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-110401 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (202.96468ms)

                                                
                                                
-- stdout --
	* [false-110401] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15642
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15642-7076/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-7076/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0114 11:04:01.186479   30890 out.go:296] Setting OutFile to fd 1 ...
	I0114 11:04:01.186695   30890 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 11:04:01.186708   30890 out.go:309] Setting ErrFile to fd 2...
	I0114 11:04:01.186715   30890 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0114 11:04:01.186869   30890 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15642-7076/.minikube/bin
	I0114 11:04:01.187658   30890 out.go:303] Setting JSON to false
	I0114 11:04:01.188860   30890 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":6389,"bootTime":1673687853,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0114 11:04:01.188946   30890 start.go:135] virtualization: kvm guest
	I0114 11:04:01.191787   30890 out.go:177] * [false-110401] minikube v1.28.0 on Ubuntu 20.04 (kvm/amd64)
	I0114 11:04:01.193362   30890 out.go:177]   - MINIKUBE_LOCATION=15642
	I0114 11:04:01.193308   30890 notify.go:220] Checking for updates...
	I0114 11:04:01.196994   30890 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0114 11:04:01.198789   30890 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15642-7076/kubeconfig
	I0114 11:04:01.200569   30890 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15642-7076/.minikube
	I0114 11:04:01.202223   30890 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0114 11:04:01.204383   30890 config.go:180] Loaded profile config "force-systemd-env-110349": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I0114 11:04:01.204562   30890 config.go:180] Loaded profile config "running-upgrade-110001": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0114 11:04:01.204695   30890 config.go:180] Loaded profile config "stopped-upgrade-110158": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0114 11:04:01.204775   30890 driver.go:365] Setting default libvirt URI to qemu:///system
	I0114 11:04:01.251431   30890 out.go:177] * Using the kvm2 driver based on user configuration
	I0114 11:04:01.252965   30890 start.go:294] selected driver: kvm2
	I0114 11:04:01.252992   30890 start.go:838] validating driver "kvm2" against <nil>
	I0114 11:04:01.253017   30890 start.go:849] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0114 11:04:01.255416   30890 out.go:177] 
	W0114 11:04:01.257150   30890 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0114 11:04:01.258781   30890 out.go:177] 

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "false-110401" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-110401
--- PASS: TestNetworkPlugins/group/false (0.50s)

                                                
                                    
TestPause/serial/Start (111.58s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-110614 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-110614 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m51.577275651s)
--- PASS: TestPause/serial/Start (111.58s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.72s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-110158
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.72s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (78.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p auto-110400 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=kvm2  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p auto-110400 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=kvm2  --container-runtime=containerd: (1m18.037023157s)
--- PASS: TestNetworkPlugins/group/auto/Start (78.04s)

TestPause/serial/SecondStartNoReconfiguration (30.91s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-110614 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-110614 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (30.873955862s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.91s)

TestPause/serial/Pause (0.97s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-110614 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.97s)

TestPause/serial/VerifyStatus (0.42s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-110614 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-110614 --output=json --layout=cluster: exit status 2 (415.738859ms)

-- stdout --
	{"Name":"pause-110614","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-110614","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.42s)

TestPause/serial/Unpause (0.95s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-110614 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.95s)

TestPause/serial/PauseAgain (1.02s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-110614 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-110614 --alsologtostderr -v=5: (1.015543015s)
--- PASS: TestPause/serial/PauseAgain (1.02s)

TestPause/serial/DeletePaused (1.25s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-110614 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-110614 --alsologtostderr -v=5: (1.249556028s)
--- PASS: TestPause/serial/DeletePaused (1.25s)

TestPause/serial/VerifyDeletedResources (20.06s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json

=== CONT  TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (20.05634699s)
--- PASS: TestPause/serial/VerifyDeletedResources (20.06s)

TestNetworkPlugins/group/kindnet/Start (81.85s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-110401 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=kvm2  --container-runtime=containerd
E0114 11:08:52.030897   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-110401 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m21.854803901s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (81.85s)

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-110400 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/cilium/Start (144.02s)
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-110401 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=kvm2  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p cilium-110401 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=kvm2  --container-runtime=containerd: (2m24.022716374s)
--- PASS: TestNetworkPlugins/group/cilium/Start (144.02s)

TestNetworkPlugins/group/auto/NetCatPod (11.46s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-110400 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-dvjqj" [fd424035-7243-442a-bfbf-d7eab9dce2e7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-dvjqj" [fd424035-7243-442a-bfbf-d7eab9dce2e7] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.010516444s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.46s)

TestNetworkPlugins/group/auto/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-110400 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

TestNetworkPlugins/group/auto/Localhost (0.20s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-110400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

TestNetworkPlugins/group/auto/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-110400 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)

TestNetworkPlugins/group/calico/Start (375.82s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p calico-110401 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=kvm2  --container-runtime=containerd
E0114 11:09:28.385351   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p calico-110401 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=kvm2  --container-runtime=containerd: (6m15.823140217s)
--- PASS: TestNetworkPlugins/group/calico/Start (375.82s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-4gcs4" [0ac8d3ee-0bed-4201-bfbc-56aaa9614c73] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.020967521s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-110401 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.43s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-110401 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-wxkps" [dccd4a2a-3e37-464b-80ac-a22e1261c98a] Pending
helpers_test.go:342: "netcat-5788d667bd-wxkps" [dccd4a2a-3e37-464b-80ac-a22e1261c98a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-wxkps" [dccd4a2a-3e37-464b-80ac-a22e1261c98a] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.017181391s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.43s)

TestNetworkPlugins/group/kindnet/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-110401 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

TestNetworkPlugins/group/kindnet/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-110401 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-110401 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/custom-flannel/Start (91.83s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-110401 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/custom-flannel/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-110401 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m31.833460798s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (91.83s)

TestNetworkPlugins/group/cilium/ControllerPod (5.05s)
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-c9jdj" [c74a3496-5f07-4bc8-89e2-ec598770e84a] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.047290393s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.05s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-110401 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.29s)

TestNetworkPlugins/group/cilium/NetCatPod (12.54s)
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-110401 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context cilium-110401 replace --force -f testdata/netcat-deployment.yaml: (1.418249272s)
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-vnkt2" [58d8f1f9-487f-45c9-91f6-3dfcca2a014a] Pending
helpers_test.go:342: "netcat-5788d667bd-vnkt2" [58d8f1f9-487f-45c9-91f6-3dfcca2a014a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0114 11:11:36.136907   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-vnkt2" [58d8f1f9-487f-45c9-91f6-3dfcca2a014a] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 11.015080074s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (12.54s)

TestNetworkPlugins/group/cilium/DNS (0.30s)
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-110401 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.30s)

TestNetworkPlugins/group/cilium/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-110401 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.19s)

TestNetworkPlugins/group/cilium/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-110401 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.19s)

TestNetworkPlugins/group/enable-default-cni/Start (129.81s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-110401 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-110401 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (2m9.80707845s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (129.81s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-110401 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (22.37s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context custom-flannel-110401 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-2pn6p" [7f77f41b-9f4e-4c5f-aed4-6007016b4257] Pending
helpers_test.go:342: "netcat-5788d667bd-2pn6p" [7f77f41b-9f4e-4c5f-aed4-6007016b4257] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-2pn6p" [7f77f41b-9f4e-4c5f-aed4-6007016b4257] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 22.015783735s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (22.37s)

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:169: (dbg) Run:  kubectl --context custom-flannel-110401 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:188: (dbg) Run:  kubectl --context custom-flannel-110401 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:238: (dbg) Run:  kubectl --context custom-flannel-110401 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/flannel/Start (75.86s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-110401 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p flannel-110401 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m15.856706754s)
--- PASS: TestNetworkPlugins/group/flannel/Start (75.86s)

TestNetworkPlugins/group/flannel/ControllerPod (7.02s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-system" ...
helpers_test.go:342: "kube-flannel-ds-amd64-9krfv" [b7ccc05f-0874-461c-829f-8387b223fdf4] Pending / Ready:ContainersNotReady (containers with unready status: [kube-flannel]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-flannel])
helpers_test.go:342: "kube-flannel-ds-amd64-9krfv" [b7ccc05f-0874-461c-829f-8387b223fdf4] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 7.022099552s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (7.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-110401 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/flannel/NetCatPod (11.44s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context flannel-110401 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-s2bv9" [209b4e17-9211-41c8-bf6c-13ef486ba8ab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0114 11:13:52.030196   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-s2bv9" [209b4e17-9211-41c8-bf6c-13ef486ba8ab] Running

=== CONT  TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.011645532s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.44s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-110401 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.42s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-110401 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-jfs4j" [6459f9ca-09a7-423b-af19-3cfe8d16ef44] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-jfs4j" [6459f9ca-09a7-423b-af19-3cfe8d16ef44] Running

=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.009382399s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.42s)

TestNetworkPlugins/group/flannel/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:169: (dbg) Run:  kubectl --context flannel-110401 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

TestNetworkPlugins/group/flannel/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:188: (dbg) Run:  kubectl --context flannel-110401 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

TestNetworkPlugins/group/flannel/HairPin (0.20s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:238: (dbg) Run:  kubectl --context flannel-110401 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

TestNetworkPlugins/group/bridge/Start (112.77s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-110401 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=kvm2  --container-runtime=containerd
E0114 11:14:02.261662   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/auto-110400/client.crt: no such file or directory
E0114 11:14:02.266968   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/auto-110400/client.crt: no such file or directory
E0114 11:14:02.277202   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/auto-110400/client.crt: no such file or directory
E0114 11:14:02.297627   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/auto-110400/client.crt: no such file or directory
E0114 11:14:02.338030   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/auto-110400/client.crt: no such file or directory
E0114 11:14:02.418340   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/auto-110400/client.crt: no such file or directory
E0114 11:14:02.578714   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/auto-110400/client.crt: no such file or directory
E0114 11:14:02.899744   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/auto-110400/client.crt: no such file or directory
E0114 11:14:03.539973   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/auto-110400/client.crt: no such file or directory
E0114 11:14:04.820207   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/auto-110400/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p bridge-110401 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m52.774420273s)
--- PASS: TestNetworkPlugins/group/bridge/Start (112.77s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-110401 exec deployment/netcat -- nslookup kubernetes.default
E0114 11:14:07.380747   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/auto-110400/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-110401 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-110401 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestStartStop/group/old-k8s-version/serial/FirstStart (161.63s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-111409 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
E0114 11:14:11.431311   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
E0114 11:14:12.501903   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/auto-110400/client.crt: no such file or directory
E0114 11:14:22.742590   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/auto-110400/client.crt: no such file or directory
E0114 11:14:28.384749   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
E0114 11:14:43.223441   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/auto-110400/client.crt: no such file or directory
E0114 11:15:10.017798   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/kindnet-110401/client.crt: no such file or directory
E0114 11:15:10.023080   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/kindnet-110401/client.crt: no such file or directory
E0114 11:15:10.033398   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/kindnet-110401/client.crt: no such file or directory
E0114 11:15:10.053718   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/kindnet-110401/client.crt: no such file or directory
E0114 11:15:10.094008   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/kindnet-110401/client.crt: no such file or directory
E0114 11:15:10.174370   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/kindnet-110401/client.crt: no such file or directory
E0114 11:15:10.334829   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/kindnet-110401/client.crt: no such file or directory
E0114 11:15:10.655223   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/kindnet-110401/client.crt: no such file or directory
E0114 11:15:11.296032   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/kindnet-110401/client.crt: no such file or directory
E0114 11:15:12.576265   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/kindnet-110401/client.crt: no such file or directory
E0114 11:15:15.137417   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/kindnet-110401/client.crt: no such file or directory
E0114 11:15:20.258576   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/kindnet-110401/client.crt: no such file or directory
E0114 11:15:24.183801   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/auto-110400/client.crt: no such file or directory
E0114 11:15:30.499299   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/kindnet-110401/client.crt: no such file or directory
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-111409 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m41.627906205s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (161.63s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-hbwxl" [ad49b8b7-ba5c-4581-852f-c4a109b3ee2d] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.028509403s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/calico/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-110401 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

TestNetworkPlugins/group/calico/NetCatPod (11.53s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-110401 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-lgxrc" [5c50f1f7-d041-40fd-b5c5-d62537844bfc] Pending
helpers_test.go:342: "netcat-5788d667bd-lgxrc" [5c50f1f7-d041-40fd-b5c5-d62537844bfc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-lgxrc" [5c50f1f7-d041-40fd-b5c5-d62537844bfc] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.017628218s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.53s)

TestNetworkPlugins/group/calico/DNS (0.34s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-110401 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.34s)

TestNetworkPlugins/group/calico/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:188: (dbg) Run:  kubectl --context calico-110401 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

TestNetworkPlugins/group/calico/HairPin (0.22s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:238: (dbg) Run:  kubectl --context calico-110401 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

TestStartStop/group/no-preload/serial/FirstStart (96.18s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-111550 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3
E0114 11:15:50.979442   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/kindnet-110401/client.crt: no such file or directory
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-111550 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3: (1m36.184510981s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (96.18s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-110401 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (13.39s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-110401 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-jr7kv" [8dad071d-0871-4acd-9bd1-2f99b158679e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-jr7kv" [8dad071d-0871-4acd-9bd1-2f99b158679e] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.010941911s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.39s)

TestNetworkPlugins/group/bridge/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-110401 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-110401 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-110401 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)
E0114 11:28:07.649276   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/no-preload-111550/client.crt: no such file or directory

TestStartStop/group/embed-certs/serial/FirstStart (139.67s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-111609 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3
E0114 11:16:25.893926   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/cilium-110401/client.crt: no such file or directory
E0114 11:16:25.899305   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/cilium-110401/client.crt: no such file or directory
E0114 11:16:25.909570   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/cilium-110401/client.crt: no such file or directory
E0114 11:16:25.929848   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/cilium-110401/client.crt: no such file or directory
E0114 11:16:25.970545   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/cilium-110401/client.crt: no such file or directory
E0114 11:16:26.050883   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/cilium-110401/client.crt: no such file or directory
E0114 11:16:26.212095   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/cilium-110401/client.crt: no such file or directory
E0114 11:16:26.532441   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/cilium-110401/client.crt: no such file or directory
E0114 11:16:27.173361   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/cilium-110401/client.crt: no such file or directory
E0114 11:16:28.454562   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/cilium-110401/client.crt: no such file or directory
E0114 11:16:31.015532   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/cilium-110401/client.crt: no such file or directory
E0114 11:16:31.940022   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/kindnet-110401/client.crt: no such file or directory
E0114 11:16:36.136377   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/cilium-110401/client.crt: no such file or directory
E0114 11:16:36.136457   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
E0114 11:16:46.104617   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/auto-110400/client.crt: no such file or directory
E0114 11:16:46.377358   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/cilium-110401/client.crt: no such file or directory
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-111609 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3: (2m19.668213417s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (139.67s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.58s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-111409 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [cb1dd12a-263b-4fe2-879e-5da8e8efc93a] Pending
helpers_test.go:342: "busybox" [cb1dd12a-263b-4fe2-879e-5da8e8efc93a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [cb1dd12a-263b-4fe2-879e-5da8e8efc93a] Running
E0114 11:17:00.968940   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/custom-flannel-110401/client.crt: no such file or directory
E0114 11:17:00.974240   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/custom-flannel-110401/client.crt: no such file or directory
E0114 11:17:00.984568   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/custom-flannel-110401/client.crt: no such file or directory
E0114 11:17:01.004865   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/custom-flannel-110401/client.crt: no such file or directory
E0114 11:17:01.045368   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/custom-flannel-110401/client.crt: no such file or directory
E0114 11:17:01.126106   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/custom-flannel-110401/client.crt: no such file or directory
E0114 11:17:01.286542   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/custom-flannel-110401/client.crt: no such file or directory
E0114 11:17:01.607287   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/custom-flannel-110401/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.031501968s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-111409 exec busybox -- /bin/sh -c "ulimit -n"
E0114 11:17:02.247557   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/custom-flannel-110401/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.58s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.94s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-111409 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0114 11:17:03.527773   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/custom-flannel-110401/client.crt: no such file or directory
E0114 11:17:06.087934   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/custom-flannel-110401/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-111409 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.823763647s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-111409 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.94s)

TestStartStop/group/old-k8s-version/serial/Stop (102.49s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-111409 --alsologtostderr -v=3
E0114 11:17:06.857586   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/cilium-110401/client.crt: no such file or directory
E0114 11:17:11.209073   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/custom-flannel-110401/client.crt: no such file or directory
E0114 11:17:21.450033   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/custom-flannel-110401/client.crt: no such file or directory
=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-111409 --alsologtostderr -v=3: (1m42.489259768s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (102.49s)

TestStartStop/group/no-preload/serial/DeployApp (8.47s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-111550 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [529dc14f-1fbf-4f90-9b3b-3a7dbd611001] Pending
helpers_test.go:342: "busybox" [529dc14f-1fbf-4f90-9b3b-3a7dbd611001] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [529dc14f-1fbf-4f90-9b3b-3a7dbd611001] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.024851102s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-111550 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.47s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-111550 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-111550 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.077034513s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-111550 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/no-preload/serial/Stop (92.45s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-111550 --alsologtostderr -v=3
E0114 11:17:41.930189   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/custom-flannel-110401/client.crt: no such file or directory
E0114 11:17:47.818386   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/cilium-110401/client.crt: no such file or directory
E0114 11:17:53.860307   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/kindnet-110401/client.crt: no such file or directory
E0114 11:18:22.891181   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/custom-flannel-110401/client.crt: no such file or directory
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-111550 --alsologtostderr -v=3: (1m32.445265695s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.45s)

TestStartStop/group/embed-certs/serial/DeployApp (9.44s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-111609 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [73e6e359-50e7-45f9-97da-1b6533886817] Pending
helpers_test.go:342: "busybox" [73e6e359-50e7-45f9-97da-1b6533886817] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [73e6e359-50e7-45f9-97da-1b6533886817] Running
E0114 11:18:35.080256   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.024861786s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-111609 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.44s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-111609 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-111609 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/embed-certs/serial/Stop (102.46s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-111609 --alsologtostderr -v=3
E0114 11:18:41.013009   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/flannel-110401/client.crt: no such file or directory
E0114 11:18:41.018278   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/flannel-110401/client.crt: no such file or directory
E0114 11:18:41.028566   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/flannel-110401/client.crt: no such file or directory
E0114 11:18:41.048929   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/flannel-110401/client.crt: no such file or directory
E0114 11:18:41.089221   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/flannel-110401/client.crt: no such file or directory
E0114 11:18:41.169558   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/flannel-110401/client.crt: no such file or directory
E0114 11:18:41.329977   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/flannel-110401/client.crt: no such file or directory
E0114 11:18:41.651074   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/flannel-110401/client.crt: no such file or directory
E0114 11:18:42.291934   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/flannel-110401/client.crt: no such file or directory
E0114 11:18:43.573089   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/flannel-110401/client.crt: no such file or directory
E0114 11:18:46.133376   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/flannel-110401/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-111609 --alsologtostderr -v=3: (1m42.463570776s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (102.46s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-111409 -n old-k8s-version-111409
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-111409 -n old-k8s-version-111409: exit status 7 (107.084733ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-111409 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/old-k8s-version/serial/SecondStart (519.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-111409 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
E0114 11:18:51.253841   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/flannel-110401/client.crt: no such file or directory
E0114 11:18:52.031178   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/functional-102121/client.crt: no such file or directory
E0114 11:18:56.191351   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/enable-default-cni-110401/client.crt: no such file or directory
E0114 11:18:56.196691   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/enable-default-cni-110401/client.crt: no such file or directory
E0114 11:18:56.206967   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/enable-default-cni-110401/client.crt: no such file or directory
E0114 11:18:56.227272   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/enable-default-cni-110401/client.crt: no such file or directory
E0114 11:18:56.267541   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/enable-default-cni-110401/client.crt: no such file or directory
E0114 11:18:56.347901   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/enable-default-cni-110401/client.crt: no such file or directory
E0114 11:18:56.508312   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/enable-default-cni-110401/client.crt: no such file or directory
E0114 11:18:56.829047   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/enable-default-cni-110401/client.crt: no such file or directory
E0114 11:18:57.469990   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/enable-default-cni-110401/client.crt: no such file or directory
E0114 11:18:58.750201   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/enable-default-cni-110401/client.crt: no such file or directory
E0114 11:19:01.310859   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/enable-default-cni-110401/client.crt: no such file or directory
E0114 11:19:01.494394   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/flannel-110401/client.crt: no such file or directory
E0114 11:19:02.260976   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/auto-110400/client.crt: no such file or directory
E0114 11:19:06.431717   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/enable-default-cni-110401/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-111409 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (8m39.128102204s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-111409 -n old-k8s-version-111409
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (519.44s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-111550 -n no-preload-111550
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-111550 -n no-preload-111550: exit status 7 (105.243307ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-111550 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (351.57s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-111550 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3
E0114 11:19:09.738637   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/cilium-110401/client.crt: no such file or directory
E0114 11:19:16.672736   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/enable-default-cni-110401/client.crt: no such file or directory
E0114 11:19:21.974802   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/flannel-110401/client.crt: no such file or directory
E0114 11:19:28.385411   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
E0114 11:19:29.944851   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/auto-110400/client.crt: no such file or directory
E0114 11:19:37.153245   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/enable-default-cni-110401/client.crt: no such file or directory
E0114 11:19:44.812044   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/custom-flannel-110401/client.crt: no such file or directory
E0114 11:20:02.935311   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/flannel-110401/client.crt: no such file or directory
E0114 11:20:10.018291   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/kindnet-110401/client.crt: no such file or directory
E0114 11:20:18.114112   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/enable-default-cni-110401/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-111550 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3: (5m51.20282077s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-111550 -n no-preload-111550
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (351.57s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-111609 -n embed-certs-111609
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-111609 -n embed-certs-111609: exit status 7 (105.240517ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-111609 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (424.86s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-111609 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3
E0114 11:20:31.420526   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/calico-110401/client.crt: no such file or directory
E0114 11:20:31.425834   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/calico-110401/client.crt: no such file or directory
E0114 11:20:31.436149   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/calico-110401/client.crt: no such file or directory
E0114 11:20:31.456491   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/calico-110401/client.crt: no such file or directory
E0114 11:20:31.496805   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/calico-110401/client.crt: no such file or directory
E0114 11:20:31.577202   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/calico-110401/client.crt: no such file or directory
E0114 11:20:31.738265   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/calico-110401/client.crt: no such file or directory
E0114 11:20:32.059299   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/calico-110401/client.crt: no such file or directory
E0114 11:20:32.700287   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/calico-110401/client.crt: no such file or directory
E0114 11:20:33.981195   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/calico-110401/client.crt: no such file or directory
E0114 11:20:36.541572   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/calico-110401/client.crt: no such file or directory
E0114 11:20:37.701521   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/kindnet-110401/client.crt: no such file or directory
E0114 11:20:41.662281   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/calico-110401/client.crt: no such file or directory
E0114 11:20:51.903192   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/calico-110401/client.crt: no such file or directory
E0114 11:20:54.940869   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/bridge-110401/client.crt: no such file or directory
E0114 11:20:54.946153   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/bridge-110401/client.crt: no such file or directory
E0114 11:20:54.956449   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/bridge-110401/client.crt: no such file or directory
E0114 11:20:54.976770   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/bridge-110401/client.crt: no such file or directory
E0114 11:20:55.017124   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/bridge-110401/client.crt: no such file or directory
E0114 11:20:55.097501   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/bridge-110401/client.crt: no such file or directory
E0114 11:20:55.258154   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/bridge-110401/client.crt: no such file or directory
E0114 11:20:55.578717   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/bridge-110401/client.crt: no such file or directory
E0114 11:20:56.218884   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/bridge-110401/client.crt: no such file or directory
E0114 11:20:57.499573   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/bridge-110401/client.crt: no such file or directory
E0114 11:21:00.059899   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/bridge-110401/client.crt: no such file or directory
E0114 11:21:05.180497   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/bridge-110401/client.crt: no such file or directory
E0114 11:21:12.383403   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/calico-110401/client.crt: no such file or directory
E0114 11:21:15.421478   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/bridge-110401/client.crt: no such file or directory
E0114 11:21:19.184092   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
E0114 11:21:24.855498   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/flannel-110401/client.crt: no such file or directory
E0114 11:21:25.894079   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/cilium-110401/client.crt: no such file or directory
E0114 11:21:35.902341   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/bridge-110401/client.crt: no such file or directory
E0114 11:21:36.135973   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
E0114 11:21:40.034962   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/enable-default-cni-110401/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-111609 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3: (7m4.551835534s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-111609 -n embed-certs-111609
E0114 11:27:27.325661   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/no-preload-111550/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (424.86s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-c7wqz" [aa1fdbb0-3eb1-462a-a956-45b57544c596] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-c7wqz" [aa1fdbb0-3eb1-462a-a956-45b57544c596] Running
E0114 11:25:10.017797   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/kindnet-110401/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.018802305s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-c7wqz" [aa1fdbb0-3eb1-462a-a956-45b57544c596] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009722208s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-111550 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-111550 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/no-preload/serial/Pause (2.84s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-111550 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-111550 -n no-preload-111550
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-111550 -n no-preload-111550: exit status 2 (271.26697ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-111550 -n no-preload-111550
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-111550 -n no-preload-111550: exit status 2 (290.908129ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-111550 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-111550 -n no-preload-111550
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-111550 -n no-preload-111550
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.84s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-112523 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3
E0114 11:25:31.420477   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/calico-110401/client.crt: no such file or directory
E0114 11:25:54.941270   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/bridge-110401/client.crt: no such file or directory
E0114 11:25:59.105237   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/calico-110401/client.crt: no such file or directory
E0114 11:26:22.624862   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/bridge-110401/client.crt: no such file or directory
E0114 11:26:25.893404   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/cilium-110401/client.crt: no such file or directory
E0114 11:26:36.136710   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/ingress-addon-legacy-102444/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-112523 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3: (1m21.581712982s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.58s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-112523 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [c0422632-58e4-40fa-b7f4-41d4f1394c83] Pending
helpers_test.go:342: "busybox" [c0422632-58e4-40fa-b7f4-41d4f1394c83] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [c0422632-58e4-40fa-b7f4-41d4f1394c83] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.020781757s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-112523 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.41s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-112523 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-112523 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (92.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-112523 --alsologtostderr -v=3
E0114 11:27:00.968238   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/custom-flannel-110401/client.crt: no such file or directory
E0114 11:27:26.687825   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/no-preload-111550/client.crt: no such file or directory
E0114 11:27:26.693119   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/no-preload-111550/client.crt: no such file or directory
E0114 11:27:26.703384   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/no-preload-111550/client.crt: no such file or directory
E0114 11:27:26.723723   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/no-preload-111550/client.crt: no such file or directory
E0114 11:27:26.764042   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/no-preload-111550/client.crt: no such file or directory
E0114 11:27:26.844385   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/no-preload-111550/client.crt: no such file or directory
E0114 11:27:27.005334   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/no-preload-111550/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-112523 --alsologtostderr -v=3: (1m32.467695327s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.47s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (17.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-mtx4l" [d21055f4-723f-4565-956a-e5f3c0e29465] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0114 11:27:27.966670   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/no-preload-111550/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-mtx4l" [d21055f4-723f-4565-956a-e5f3c0e29465] Running

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.015630954s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (17.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-84b68f675b-97cfn" [ac7dc501-636b-473a-8ab6-9e7c4539d212] Running
E0114 11:27:29.246876   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/no-preload-111550/client.crt: no such file or directory
E0114 11:27:31.807998   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/no-preload-111550/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016590142s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-84b68f675b-97cfn" [ac7dc501-636b-473a-8ab6-9e7c4539d212] Running
E0114 11:27:36.928246   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/no-preload-111550/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009065037s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-111409 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-111409 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)
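The image check at start_stop_delete_test.go:304 parses the JSON emitted by `crictl images -o json` and reports every repo tag that falls outside the registries minikube itself provisions. A rough Python sketch of that filtering, using fabricated sample data and an assumed registry allow-list (the real test compares against minikube's own expected-image table, not a prefix list):

```python
import json

# Fabricated sample in the shape of `crictl images -o json` output;
# the tags mirror the "Found non-minikube image" lines in the log above.
CRICTL_OUTPUT = json.dumps({
    "images": [
        {"repoTags": ["registry.k8s.io/pause:3.6"]},
        {"repoTags": ["kindest/kindnetd:v20210326-1e038dc5"]},
        {"repoTags": ["gcr.io/k8s-minikube/busybox:1.28.4-glibc"]},
    ]
})

# Registries assumed to hold minikube-shipped images; this allow-list is
# illustrative only, not minikube's actual table.
EXPECTED_PREFIXES = ("registry.k8s.io/", "k8s.gcr.io/", "docker.io/kubernetesui/")

def non_minikube_images(raw: str) -> list[str]:
    """Return repo tags that fall outside the expected registries."""
    images = json.loads(raw).get("images", [])
    tags = [t for img in images for t in img.get("repoTags", [])]
    return [t for t in tags if not t.startswith(EXPECTED_PREFIXES)]

# Flags kindnetd and busybox, matching the two log lines above.
found = non_minikube_images(CRICTL_OUTPUT)
```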

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.70s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-111409 --alsologtostderr -v=1

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-111409 -n old-k8s-version-111409
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-111409 -n old-k8s-version-111409: exit status 2 (270.554866ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-111409 -n old-k8s-version-111409
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-111409 -n old-k8s-version-111409: exit status 2 (282.4943ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-111409 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-111409 -n old-k8s-version-111409
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-111409 -n old-k8s-version-111409
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.70s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (70.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-112742 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3

                                                
                                                
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-112742 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3: (1m10.027457501s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (70.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-mtx4l" [d21055f4-723f-4565-956a-e5f3c0e29465] Running
E0114 11:27:47.168410   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/no-preload-111550/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008167321s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-111609 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-111609 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-111609 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-111609 -n embed-certs-111609
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-111609 -n embed-certs-111609: exit status 2 (284.330362ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-111609 -n embed-certs-111609
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-111609 -n embed-certs-111609: exit status 2 (279.497278ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-111609 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-111609 -n embed-certs-111609
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-111609 -n embed-certs-111609
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-112523 -n default-k8s-diff-port-112523
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-112523 -n default-k8s-diff-port-112523: exit status 7 (89.70217ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-112523 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (416.60s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-112523 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3
E0114 11:28:41.012618   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/flannel-110401/client.crt: no such file or directory
E0114 11:28:48.610047   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/no-preload-111550/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-112523 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3: (6m56.283396395s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-112523 -n default-k8s-diff-port-112523
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (416.60s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-112742 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-112742 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.021653589s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-112742 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-112742 --alsologtostderr -v=3: (2.131015603s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-112742 -n newest-cni-112742
E0114 11:28:56.191937   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/enable-default-cni-110401/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-112742 -n newest-cni-112742: exit status 7 (98.952956ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-112742 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (76.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-112742 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3
E0114 11:29:02.261683   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/auto-110400/client.crt: no such file or directory
E0114 11:29:28.384892   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/addons-100659/client.crt: no such file or directory
E0114 11:30:10.018523   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/kindnet-110401/client.crt: no such file or directory
E0114 11:30:10.531022   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/no-preload-111550/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-112742 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3: (1m16.169586266s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-112742 -n newest-cni-112742
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (76.45s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-112742 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-112742 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-112742 -n newest-cni-112742
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-112742 -n newest-cni-112742: exit status 2 (267.569058ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-112742 -n newest-cni-112742
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-112742 -n newest-cni-112742: exit status 2 (267.532116ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-112742 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-112742 -n newest-cni-112742
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-112742 -n newest-cni-112742
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (16.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-p7gvs" [4ccd06be-e95c-4e41-a1a9-6bb3c56d7b20] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0114 11:35:31.420387   13921 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15642-7076/.minikube/profiles/calico-110401/client.crt: no such file or directory
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-p7gvs" [4ccd06be-e95c-4e41-a1a9-6bb3c56d7b20] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.020183185s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (16.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-f87d45d87-p7gvs" [4ccd06be-e95c-4e41-a1a9-6bb3c56d7b20] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007962165s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-112523 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-112523 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-112523 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-112523 -n default-k8s-diff-port-112523
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-112523 -n default-k8s-diff-port-112523: exit status 2 (255.504737ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-112523 -n default-k8s-diff-port-112523
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-112523 -n default-k8s-diff-port-112523: exit status 2 (261.375605ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-112523 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-112523 -n default-k8s-diff-port-112523
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-112523 -n default-k8s-diff-port-112523
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.47s)

Test skip (32/297)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.25.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.25.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.25.3/cached-images (0.00s)

TestDownloadOnly/v1.25.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.25.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.25.3/binaries (0.00s)

TestDownloadOnly/v1.25.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.25.3/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.25.3/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:214: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:455: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:456: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:291: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (0.24s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:91: Skipping the test as containerd container runtimes requires CNI
helpers_test.go:175: Cleaning up "kubenet-110400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-110400
--- SKIP: TestNetworkPlugins/group/kubenet (0.24s)

TestStartStop/group/disable-driver-mounts (0.21s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-112523" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-112523
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)