Test Report: KVM_Linux_containerd 16573

2f0304e5caeb910cf6b713a3408f4279364136e7:2023-05-24:29404

Failed tests (3/300)

| Order | Failed test                      | Duration (s) |
|-------|----------------------------------|--------------|
| 213   | TestPreload                      | 297.25       |
| 219   | TestRunningBinaryUpgrade         | 1916.35     |
| 252   | TestStoppedBinaryUpgrade/Upgrade | 1675.16     |
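
Each failure below can usually be reproduced in isolation with Go's -run filter. A minimal sketch, assuming the standard minikube layout where the integration tests live in test/integration and out/minikube-linux-amd64 is already built; any extra harness flags (for example, selecting the kvm2 driver and containerd runtime used in this run) are omitted here:

	go test -v -timeout 90m ./test/integration -run 'TestPreload'
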
TestPreload (297.25s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-262726 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0524 19:15:33.044304   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-262726 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m58.632754164s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-262726 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-262726
E0524 19:17:05.590790   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-262726: (1m31.651243094s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-262726 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0524 19:18:48.773626   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 19:19:02.538468   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-262726 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (1m23.310501606s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-262726 -- sudo crictl image ls
preload_test.go:85: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	IMAGE                                     TAG                  IMAGE ID            SIZE
	docker.io/kindest/kindnetd                v20220726-ed811e41   d921cee849482       25.8MB
	gcr.io/k8s-minikube/storage-provisioner   v5                   6e38f40d628db       9.06MB
	k8s.gcr.io/coredns/coredns                v1.8.6               a4ca41631cc7a       13.6MB
	registry.k8s.io/coredns/coredns           v1.8.6               a4ca41631cc7a       13.6MB
	k8s.gcr.io/etcd                           3.5.3-0              aebe758cef4cd       102MB
	registry.k8s.io/etcd                      3.5.3-0              aebe758cef4cd       102MB
	k8s.gcr.io/kube-apiserver                 v1.24.4              6cab9d1bed1be       33.8MB
	registry.k8s.io/kube-apiserver            v1.24.4              6cab9d1bed1be       33.8MB
	k8s.gcr.io/kube-controller-manager        v1.24.4              1f99cb6da9a82       31MB
	registry.k8s.io/kube-controller-manager   v1.24.4              1f99cb6da9a82       31MB
	k8s.gcr.io/kube-proxy                     v1.24.4              7a53d1e08ef58       39.5MB
	registry.k8s.io/kube-proxy                v1.24.4              7a53d1e08ef58       39.5MB
	k8s.gcr.io/kube-scheduler                 v1.24.4              03fa22539fc1c       15.5MB
	registry.k8s.io/kube-scheduler            v1.24.4              03fa22539fc1c       15.5MB
	k8s.gcr.io/pause                          3.7                  221177c6082a8       311kB
	registry.k8s.io/pause                     3.7                  221177c6082a8       311kB

-- /stdout --
panic.go:522: *** TestPreload FAILED at 2023-05-24 19:19:53.203224306 +0000 UTC m=+2610.221011729
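
The failing assertion at preload_test.go:85 is effectively a substring search for the previously pulled image in the crictl listing; after the stop/start cycle only the preloaded v1.24.4 images remain, so the check fails. An equivalent check by hand, built only from commands that appear in this log (the failure message below is illustrative, not the test's wording):

	out/minikube-linux-amd64 ssh -p test-preload-262726 -- sudo crictl image ls \
		| grep -F 'gcr.io/k8s-minikube/busybox' \
		|| echo 'busybox image did not survive the stop/start cycle'
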
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-262726 -n test-preload-262726
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-262726 logs -n 25
helpers_test.go:252: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-053110 ssh -n                                                                 | multinode-053110     | jenkins | v1.30.1 | 24 May 23 18:57 UTC | 24 May 23 18:57 UTC |
	|         | multinode-053110-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-053110 ssh -n multinode-053110 sudo cat                                       | multinode-053110     | jenkins | v1.30.1 | 24 May 23 18:57 UTC | 24 May 23 18:57 UTC |
	|         | /home/docker/cp-test_multinode-053110-m03_multinode-053110.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-053110 cp multinode-053110-m03:/home/docker/cp-test.txt                       | multinode-053110     | jenkins | v1.30.1 | 24 May 23 18:57 UTC | 24 May 23 18:57 UTC |
	|         | multinode-053110-m02:/home/docker/cp-test_multinode-053110-m03_multinode-053110-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-053110 ssh -n                                                                 | multinode-053110     | jenkins | v1.30.1 | 24 May 23 18:57 UTC | 24 May 23 18:57 UTC |
	|         | multinode-053110-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-053110 ssh -n multinode-053110-m02 sudo cat                                   | multinode-053110     | jenkins | v1.30.1 | 24 May 23 18:57 UTC | 24 May 23 18:57 UTC |
	|         | /home/docker/cp-test_multinode-053110-m03_multinode-053110-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-053110 node stop m03                                                          | multinode-053110     | jenkins | v1.30.1 | 24 May 23 18:57 UTC | 24 May 23 18:57 UTC |
	| node    | multinode-053110 node start                                                             | multinode-053110     | jenkins | v1.30.1 | 24 May 23 18:57 UTC | 24 May 23 18:58 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-053110                                                                | multinode-053110     | jenkins | v1.30.1 | 24 May 23 18:58 UTC |                     |
	| stop    | -p multinode-053110                                                                     | multinode-053110     | jenkins | v1.30.1 | 24 May 23 18:58 UTC | 24 May 23 19:01 UTC |
	| start   | -p multinode-053110                                                                     | multinode-053110     | jenkins | v1.30.1 | 24 May 23 19:01 UTC | 24 May 23 19:07 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-053110                                                                | multinode-053110     | jenkins | v1.30.1 | 24 May 23 19:07 UTC |                     |
	| node    | multinode-053110 node delete                                                            | multinode-053110     | jenkins | v1.30.1 | 24 May 23 19:07 UTC | 24 May 23 19:07 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-053110 stop                                                                   | multinode-053110     | jenkins | v1.30.1 | 24 May 23 19:07 UTC | 24 May 23 19:10 UTC |
	| start   | -p multinode-053110                                                                     | multinode-053110     | jenkins | v1.30.1 | 24 May 23 19:10 UTC | 24 May 23 19:14 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| node    | list -p multinode-053110                                                                | multinode-053110     | jenkins | v1.30.1 | 24 May 23 19:14 UTC |                     |
	| start   | -p multinode-053110-m02                                                                 | multinode-053110-m02 | jenkins | v1.30.1 | 24 May 23 19:14 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| start   | -p multinode-053110-m03                                                                 | multinode-053110-m03 | jenkins | v1.30.1 | 24 May 23 19:14 UTC | 24 May 23 19:14 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| node    | add -p multinode-053110                                                                 | multinode-053110     | jenkins | v1.30.1 | 24 May 23 19:14 UTC |                     |
	| delete  | -p multinode-053110-m03                                                                 | multinode-053110-m03 | jenkins | v1.30.1 | 24 May 23 19:14 UTC | 24 May 23 19:14 UTC |
	| delete  | -p multinode-053110                                                                     | multinode-053110     | jenkins | v1.30.1 | 24 May 23 19:14 UTC | 24 May 23 19:14 UTC |
	| start   | -p test-preload-262726                                                                  | test-preload-262726  | jenkins | v1.30.1 | 24 May 23 19:14 UTC | 24 May 23 19:16 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| ssh     | -p test-preload-262726                                                                  | test-preload-262726  | jenkins | v1.30.1 | 24 May 23 19:16 UTC | 24 May 23 19:16 UTC |
	|         | -- sudo crictl pull                                                                     |                      |         |         |                     |                     |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-262726                                                                  | test-preload-262726  | jenkins | v1.30.1 | 24 May 23 19:16 UTC | 24 May 23 19:18 UTC |
	| start   | -p test-preload-262726                                                                  | test-preload-262726  | jenkins | v1.30.1 | 24 May 23 19:18 UTC | 24 May 23 19:19 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| ssh     | -p test-preload-262726 -- sudo                                                          | test-preload-262726  | jenkins | v1.30.1 | 24 May 23 19:19 UTC | 24 May 23 19:19 UTC |
	|         | crictl image ls                                                                         |                      |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/24 19:18:29
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.20.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0524 19:18:29.711769   98716 out.go:296] Setting OutFile to fd 1 ...
	I0524 19:18:29.711901   98716 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 19:18:29.711912   98716 out.go:309] Setting ErrFile to fd 2...
	I0524 19:18:29.711919   98716 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 19:18:29.712054   98716 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16573-71939/.minikube/bin
	I0524 19:18:29.712522   98716 out.go:303] Setting JSON to false
	I0524 19:18:29.713329   98716 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":10845,"bootTime":1684945065,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1034-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0524 19:18:29.713410   98716 start.go:135] virtualization: kvm guest
	I0524 19:18:29.716721   98716 out.go:177] * [test-preload-262726] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0524 19:18:29.718796   98716 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 19:18:29.720581   98716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 19:18:29.718811   98716 notify.go:220] Checking for updates...
	I0524 19:18:29.722263   98716 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16573-71939/kubeconfig
	I0524 19:18:29.724088   98716 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16573-71939/.minikube
	I0524 19:18:29.725796   98716 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0524 19:18:29.727298   98716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 19:18:29.729179   98716 config.go:182] Loaded profile config "test-preload-262726": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I0524 19:18:29.729535   98716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0524 19:18:29.729574   98716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0524 19:18:29.744072   98716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43131
	I0524 19:18:29.744441   98716 main.go:141] libmachine: () Calling .GetVersion
	I0524 19:18:29.745019   98716 main.go:141] libmachine: Using API Version  1
	I0524 19:18:29.745044   98716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0524 19:18:29.745374   98716 main.go:141] libmachine: () Calling .GetMachineName
	I0524 19:18:29.745569   98716 main.go:141] libmachine: (test-preload-262726) Calling .DriverName
	I0524 19:18:29.747440   98716 out.go:177] * Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	I0524 19:18:29.749033   98716 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 19:18:29.749308   98716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0524 19:18:29.749362   98716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0524 19:18:29.763201   98716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44745
	I0524 19:18:29.763492   98716 main.go:141] libmachine: () Calling .GetVersion
	I0524 19:18:29.763902   98716 main.go:141] libmachine: Using API Version  1
	I0524 19:18:29.763923   98716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0524 19:18:29.764201   98716 main.go:141] libmachine: () Calling .GetMachineName
	I0524 19:18:29.764358   98716 main.go:141] libmachine: (test-preload-262726) Calling .DriverName
	I0524 19:18:29.797067   98716 out.go:177] * Using the kvm2 driver based on existing profile
	I0524 19:18:29.798689   98716 start.go:295] selected driver: kvm2
	I0524 19:18:29.798704   98716 start.go:870] validating driver "kvm2" against &{Name:test-preload-262726 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-262726 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 19:18:29.798823   98716 start.go:881] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 19:18:29.799422   98716 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 19:18:29.799497   98716 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16573-71939/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0524 19:18:29.812802   98716 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0524 19:18:29.813107   98716 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 19:18:29.813140   98716 cni.go:84] Creating CNI manager for ""
	I0524 19:18:29.813151   98716 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0524 19:18:29.813164   98716 start_flags.go:319] config:
	{Name:test-preload-262726 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-262726 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 19:18:29.813271   98716 iso.go:125] acquiring lock: {Name:mk070acfedcbbaf2c11bfabff12ffb52c449689f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 19:18:29.815369   98716 out.go:177] * Starting control plane node test-preload-262726 in cluster test-preload-262726
	I0524 19:18:29.816992   98716 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
	I0524 19:18:29.842970   98716 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
	I0524 19:18:29.843000   98716 cache.go:57] Caching tarball of preloaded images
	I0524 19:18:29.843117   98716 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
	I0524 19:18:29.845010   98716 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0524 19:18:29.846518   98716 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
	I0524 19:18:29.883002   98716 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:41d292e9d8b8bb8fdf3bc94dc3c43bf0 -> /home/jenkins/minikube-integration/16573-71939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
	I0524 19:18:34.602043   98716 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
	I0524 19:18:34.602125   98716 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16573-71939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
	I0524 19:18:35.465495   98716 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.4 on containerd
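
The download URL above pins an md5 checksum in its query string, which preload.go then verifies against the saved tarball (the "saving checksum"/"verifying checksum" lines). The same verification can be run by hand, using the checksum and cache path exactly as they appear in the log:

	echo '41d292e9d8b8bb8fdf3bc94dc3c43bf0  /home/jenkins/minikube-integration/16573-71939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4' \
		| md5sum -c -
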
	I0524 19:18:35.465636   98716 profile.go:148] Saving config to /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/config.json ...
	I0524 19:18:35.465874   98716 cache.go:195] Successfully downloaded all kic artifacts
	I0524 19:18:35.465904   98716 start.go:364] acquiring machines lock for test-preload-262726: {Name:mk7d4981ff7dce8da894e9fe23513f11c9471c1a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 19:18:35.465969   98716 start.go:368] acquired machines lock for "test-preload-262726" in 49.087µs
	I0524 19:18:35.465985   98716 start.go:96] Skipping create...Using existing machine configuration
	I0524 19:18:35.465991   98716 fix.go:55] fixHost starting: 
	I0524 19:18:35.466259   98716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0524 19:18:35.466296   98716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0524 19:18:35.480633   98716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43115
	I0524 19:18:35.481045   98716 main.go:141] libmachine: () Calling .GetVersion
	I0524 19:18:35.481565   98716 main.go:141] libmachine: Using API Version  1
	I0524 19:18:35.481589   98716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0524 19:18:35.481955   98716 main.go:141] libmachine: () Calling .GetMachineName
	I0524 19:18:35.482153   98716 main.go:141] libmachine: (test-preload-262726) Calling .DriverName
	I0524 19:18:35.482319   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetState
	I0524 19:18:35.484046   98716 fix.go:103] recreateIfNeeded on test-preload-262726: state=Stopped err=<nil>
	I0524 19:18:35.484073   98716 main.go:141] libmachine: (test-preload-262726) Calling .DriverName
	W0524 19:18:35.484245   98716 fix.go:129] unexpected machine state, will restart: <nil>
	I0524 19:18:35.487766   98716 out.go:177] * Restarting existing kvm2 VM for "test-preload-262726" ...
	I0524 19:18:35.489471   98716 main.go:141] libmachine: (test-preload-262726) Calling .Start
	I0524 19:18:35.489627   98716 main.go:141] libmachine: (test-preload-262726) Ensuring networks are active...
	I0524 19:18:35.490325   98716 main.go:141] libmachine: (test-preload-262726) Ensuring network default is active
	I0524 19:18:35.490639   98716 main.go:141] libmachine: (test-preload-262726) Ensuring network mk-test-preload-262726 is active
	I0524 19:18:35.490962   98716 main.go:141] libmachine: (test-preload-262726) Getting domain xml...
	I0524 19:18:35.491640   98716 main.go:141] libmachine: (test-preload-262726) Creating domain...
	I0524 19:18:36.666074   98716 main.go:141] libmachine: (test-preload-262726) Waiting to get IP...
	I0524 19:18:36.666949   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:36.667317   98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
	I0524 19:18:36.667422   98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:36.667310   98763 retry.go:31] will retry after 297.185972ms: waiting for machine to come up
	I0524 19:18:36.965858   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:36.966308   98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
	I0524 19:18:36.966340   98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:36.966250   98763 retry.go:31] will retry after 261.717109ms: waiting for machine to come up
	I0524 19:18:37.229823   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:37.230184   98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
	I0524 19:18:37.230218   98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:37.230123   98763 retry.go:31] will retry after 375.565654ms: waiting for machine to come up
	I0524 19:18:37.607318   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:37.607766   98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
	I0524 19:18:37.607797   98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:37.607722   98763 retry.go:31] will retry after 384.604355ms: waiting for machine to come up
	I0524 19:18:37.994372   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:37.994877   98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
	I0524 19:18:37.994910   98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:37.994816   98763 retry.go:31] will retry after 487.001687ms: waiting for machine to come up
	I0524 19:18:38.483495   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:38.483968   98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
	I0524 19:18:38.483999   98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:38.483897   98763 retry.go:31] will retry after 824.382758ms: waiting for machine to come up
	I0524 19:18:39.309988   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:39.310430   98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
	I0524 19:18:39.310462   98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:39.310374   98763 retry.go:31] will retry after 1.07159268s: waiting for machine to come up
	I0524 19:18:40.383490   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:40.383886   98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
	I0524 19:18:40.383922   98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:40.383821   98763 retry.go:31] will retry after 985.49653ms: waiting for machine to come up
	I0524 19:18:41.371426   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:41.371846   98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
	I0524 19:18:41.371870   98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:41.371796   98763 retry.go:31] will retry after 1.173158795s: waiting for machine to come up
	I0524 19:18:42.546507   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:42.546918   98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
	I0524 19:18:42.546954   98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:42.546861   98763 retry.go:31] will retry after 1.551327554s: waiting for machine to come up
	I0524 19:18:44.100729   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:44.101259   98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
	I0524 19:18:44.101298   98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:44.101167   98763 retry.go:31] will retry after 1.783324652s: waiting for machine to come up
	I0524 19:18:45.886671   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:45.887101   98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
	I0524 19:18:45.887124   98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:45.887054   98763 retry.go:31] will retry after 3.324585975s: waiting for machine to come up
	I0524 19:18:49.215422   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:49.215775   98716 main.go:141] libmachine: (test-preload-262726) DBG | unable to find current IP address of domain test-preload-262726 in network mk-test-preload-262726
	I0524 19:18:49.215797   98716 main.go:141] libmachine: (test-preload-262726) DBG | I0524 19:18:49.215727   98763 retry.go:31] will retry after 3.488218878s: waiting for machine to come up
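
The retry.go lines show the driver polling for the domain's DHCP lease with growing backoff (from roughly 300ms up to several seconds) until an IP appears. A rough manual equivalent is sketched below, assuming virsh access to the same libvirt instance; minikube itself reads the lease through the libvirt API rather than shelling out:

	while ! virsh domifaddr test-preload-262726 | grep -q ipv4; do sleep 1; done
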
	I0524 19:18:52.706357   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:52.706848   98716 main.go:141] libmachine: (test-preload-262726) Found IP for machine: 192.168.39.12
	I0524 19:18:52.706867   98716 main.go:141] libmachine: (test-preload-262726) Reserving static IP address...
	I0524 19:18:52.706879   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has current primary IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:52.707330   98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "test-preload-262726", mac: "52:54:00:2c:ce:d5", ip: "192.168.39.12"} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
	I0524 19:18:52.707366   98716 main.go:141] libmachine: (test-preload-262726) DBG | skip adding static IP to network mk-test-preload-262726 - found existing host DHCP lease matching {name: "test-preload-262726", mac: "52:54:00:2c:ce:d5", ip: "192.168.39.12"}
	I0524 19:18:52.707383   98716 main.go:141] libmachine: (test-preload-262726) Reserved static IP address: 192.168.39.12
	I0524 19:18:52.707413   98716 main.go:141] libmachine: (test-preload-262726) Waiting for SSH to be available...
	I0524 19:18:52.707445   98716 main.go:141] libmachine: (test-preload-262726) DBG | Getting to WaitForSSH function...
	I0524 19:18:52.709527   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:52.709795   98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
	I0524 19:18:52.709827   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:52.709896   98716 main.go:141] libmachine: (test-preload-262726) DBG | Using SSH client type: external
	I0524 19:18:52.709939   98716 main.go:141] libmachine: (test-preload-262726) DBG | Using SSH private key: /home/jenkins/minikube-integration/16573-71939/.minikube/machines/test-preload-262726/id_rsa (-rw-------)
	I0524 19:18:52.709967   98716 main.go:141] libmachine: (test-preload-262726) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.12 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16573-71939/.minikube/machines/test-preload-262726/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0524 19:18:52.709980   98716 main.go:141] libmachine: (test-preload-262726) DBG | About to run SSH command:
	I0524 19:18:52.709992   98716 main.go:141] libmachine: (test-preload-262726) DBG | exit 0
	I0524 19:18:52.796422   98716 main.go:141] libmachine: (test-preload-262726) DBG | SSH cmd err, output: <nil>: 
	I0524 19:18:52.796747   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetConfigRaw
	I0524 19:18:52.797377   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetIP
	I0524 19:18:52.799667   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:52.799999   98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
	I0524 19:18:52.800032   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:52.800218   98716 profile.go:148] Saving config to /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/config.json ...
	I0524 19:18:52.800406   98716 machine.go:88] provisioning docker machine ...
	I0524 19:18:52.800441   98716 main.go:141] libmachine: (test-preload-262726) Calling .DriverName
	I0524 19:18:52.800657   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetMachineName
	I0524 19:18:52.800840   98716 buildroot.go:166] provisioning hostname "test-preload-262726"
	I0524 19:18:52.800860   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetMachineName
	I0524 19:18:52.800989   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHHostname
	I0524 19:18:52.802923   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:52.803209   98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
	I0524 19:18:52.803236   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:52.803304   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHPort
	I0524 19:18:52.803477   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHKeyPath
	I0524 19:18:52.803634   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHKeyPath
	I0524 19:18:52.803789   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHUsername
	I0524 19:18:52.803946   98716 main.go:141] libmachine: Using SSH client type: native
	I0524 19:18:52.804367   98716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0524 19:18:52.804379   98716 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-262726 && echo "test-preload-262726" | sudo tee /etc/hostname
	I0524 19:18:52.937910   98716 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-262726
	
	I0524 19:18:52.937934   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHHostname
	I0524 19:18:52.940161   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:52.940453   98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
	I0524 19:18:52.940484   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:52.940652   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHPort
	I0524 19:18:52.940815   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHKeyPath
	I0524 19:18:52.940976   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHKeyPath
	I0524 19:18:52.941115   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHUsername
	I0524 19:18:52.941288   98716 main.go:141] libmachine: Using SSH client type: native
	I0524 19:18:52.941720   98716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0524 19:18:52.941740   98716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-262726' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-262726/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-262726' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0524 19:18:53.064537   98716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0524 19:18:53.064559   98716 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16573-71939/.minikube CaCertPath:/home/jenkins/minikube-integration/16573-71939/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16573-71939/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16573-71939/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16573-71939/.minikube}
	I0524 19:18:53.064578   98716 buildroot.go:174] setting up certificates
	I0524 19:18:53.064588   98716 provision.go:83] configureAuth start
	I0524 19:18:53.064597   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetMachineName
	I0524 19:18:53.064824   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetIP
	I0524 19:18:53.066716   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:53.067009   98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
	I0524 19:18:53.067040   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:53.067190   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHHostname
	I0524 19:18:53.068978   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:53.069260   98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
	I0524 19:18:53.069294   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:53.069385   98716 provision.go:138] copyHostCerts
	I0524 19:18:53.069444   98716 exec_runner.go:144] found /home/jenkins/minikube-integration/16573-71939/.minikube/key.pem, removing ...
	I0524 19:18:53.069453   98716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16573-71939/.minikube/key.pem
	I0524 19:18:53.069519   98716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16573-71939/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16573-71939/.minikube/key.pem (1679 bytes)
	I0524 19:18:53.069618   98716 exec_runner.go:144] found /home/jenkins/minikube-integration/16573-71939/.minikube/ca.pem, removing ...
	I0524 19:18:53.069633   98716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16573-71939/.minikube/ca.pem
	I0524 19:18:53.069666   98716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16573-71939/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16573-71939/.minikube/ca.pem (1078 bytes)
	I0524 19:18:53.069742   98716 exec_runner.go:144] found /home/jenkins/minikube-integration/16573-71939/.minikube/cert.pem, removing ...
	I0524 19:18:53.069751   98716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/16573-71939/.minikube/cert.pem
	I0524 19:18:53.069781   98716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16573-71939/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16573-71939/.minikube/cert.pem (1123 bytes)
	I0524 19:18:53.069893   98716 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16573-71939/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16573-71939/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16573-71939/.minikube/certs/ca-key.pem org=jenkins.test-preload-262726 san=[192.168.39.12 192.168.39.12 localhost 127.0.0.1 minikube test-preload-262726]
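
configureAuth regenerates the server certificate with the SAN list shown in the san=[...] field above. One way to confirm what actually landed in the certificate, assuming openssl is available on the host (an extra inspection step, not something the test runs):

	openssl x509 -in /home/jenkins/minikube-integration/16573-71939/.minikube/machines/server.pem \
		-noout -text | grep -A1 'Subject Alternative Name'
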
	I0524 19:18:53.148825   98716 provision.go:172] copyRemoteCerts
	I0524 19:18:53.148886   98716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0524 19:18:53.148914   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHHostname
	I0524 19:18:53.150893   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:53.151180   98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
	I0524 19:18:53.151213   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:53.151320   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHPort
	I0524 19:18:53.151478   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHKeyPath
	I0524 19:18:53.151600   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHUsername
	I0524 19:18:53.151709   98716 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/machines/test-preload-262726/id_rsa Username:docker}
	I0524 19:18:53.238277   98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0524 19:18:53.259310   98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0524 19:18:53.279833   98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0524 19:18:53.300144   98716 provision.go:86] duration metric: configureAuth took 235.54757ms
	I0524 19:18:53.300157   98716 buildroot.go:189] setting minikube options for container-runtime
	I0524 19:18:53.300316   98716 config.go:182] Loaded profile config "test-preload-262726": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I0524 19:18:53.300329   98716 machine.go:91] provisioned docker machine in 499.910757ms
	I0524 19:18:53.300336   98716 start.go:300] post-start starting for "test-preload-262726" (driver="kvm2")
	I0524 19:18:53.300343   98716 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0524 19:18:53.300378   98716 main.go:141] libmachine: (test-preload-262726) Calling .DriverName
	I0524 19:18:53.300598   98716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0524 19:18:53.300624   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHHostname
	I0524 19:18:53.302734   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:53.303044   98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
	I0524 19:18:53.303070   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:53.303218   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHPort
	I0524 19:18:53.303384   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHKeyPath
	I0524 19:18:53.303576   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHUsername
	I0524 19:18:53.303730   98716 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/machines/test-preload-262726/id_rsa Username:docker}
	I0524 19:18:53.389780   98716 ssh_runner.go:195] Run: cat /etc/os-release
	I0524 19:18:53.393476   98716 info.go:137] Remote host: Buildroot 2021.02.12
	I0524 19:18:53.393493   98716 filesync.go:126] Scanning /home/jenkins/minikube-integration/16573-71939/.minikube/addons for local assets ...
	I0524 19:18:53.393541   98716 filesync.go:126] Scanning /home/jenkins/minikube-integration/16573-71939/.minikube/files for local assets ...
	I0524 19:18:53.393640   98716 filesync.go:149] local asset: /home/jenkins/minikube-integration/16573-71939/.minikube/files/etc/ssl/certs/791532.pem -> 791532.pem in /etc/ssl/certs
	I0524 19:18:53.393740   98716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0524 19:18:53.402117   98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/files/etc/ssl/certs/791532.pem --> /etc/ssl/certs/791532.pem (1708 bytes)
	I0524 19:18:53.422540   98716 start.go:303] post-start completed in 122.193902ms
	I0524 19:18:53.422558   98716 fix.go:57] fixHost completed within 17.956566594s
	I0524 19:18:53.422574   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHHostname
	I0524 19:18:53.424632   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:53.424917   98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
	I0524 19:18:53.424949   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:53.425104   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHPort
	I0524 19:18:53.425282   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHKeyPath
	I0524 19:18:53.425445   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHKeyPath
	I0524 19:18:53.425630   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHUsername
	I0524 19:18:53.425792   98716 main.go:141] libmachine: Using SSH client type: native
	I0524 19:18:53.426173   98716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80dc80] 0x810d20 <nil>  [] 0s} 192.168.39.12 22 <nil> <nil>}
	I0524 19:18:53.426184   98716 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0524 19:18:53.541180   98716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684955933.490809786
	
	I0524 19:18:53.541202   98716 fix.go:207] guest clock: 1684955933.490809786
	I0524 19:18:53.541211   98716 fix.go:220] Guest: 2023-05-24 19:18:53.490809786 +0000 UTC Remote: 2023-05-24 19:18:53.422562549 +0000 UTC m=+23.742028532 (delta=68.247237ms)
	I0524 19:18:53.541240   98716 fix.go:191] guest clock delta is within tolerance: 68.247237ms
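
For reference, the guest-clock check above runs "date +%s.%N" on the VM, parses the epoch timestamp, and compares it against the host's wall clock. A minimal Go sketch of that comparison, assuming a hypothetical 2-second tolerance (minikube's actual threshold and helper names are not shown in this log):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns output like "1684955933.490809786" into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		sec, frac, _ := strings.Cut(strings.TrimSpace(out), ".")
		s, err := strconv.ParseInt(sec, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var ns int64
		if frac != "" {
			if ns, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(s, ns), nil
	}

	func main() {
		guest, err := parseGuestClock("1684955933.490809786") // sample value from the log above
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest) // host "now" minus guest clock
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold, for illustration only
		if delta <= tolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance; would sync the clock\n", delta)
		}
	}
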
	I0524 19:18:53.541247   98716 start.go:83] releasing machines lock for "test-preload-262726", held for 18.07526583s
	I0524 19:18:53.541272   98716 main.go:141] libmachine: (test-preload-262726) Calling .DriverName
	I0524 19:18:53.541465   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetIP
	I0524 19:18:53.543460   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:53.543786   98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
	I0524 19:18:53.543817   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:53.543926   98716 main.go:141] libmachine: (test-preload-262726) Calling .DriverName
	I0524 19:18:53.544310   98716 main.go:141] libmachine: (test-preload-262726) Calling .DriverName
	I0524 19:18:53.544476   98716 main.go:141] libmachine: (test-preload-262726) Calling .DriverName
	I0524 19:18:53.544587   98716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0524 19:18:53.544642   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHHostname
	I0524 19:18:53.544691   98716 ssh_runner.go:195] Run: cat /version.json
	I0524 19:18:53.544711   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHHostname
	I0524 19:18:53.547094   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:53.547376   98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
	I0524 19:18:53.547407   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:53.547432   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:53.547554   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHPort
	I0524 19:18:53.547729   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHKeyPath
	I0524 19:18:53.547753   98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
	I0524 19:18:53.547783   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:18:53.547896   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHUsername
	I0524 19:18:53.547974   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHPort
	I0524 19:18:53.548046   98716 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/machines/test-preload-262726/id_rsa Username:docker}
	I0524 19:18:53.548123   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHKeyPath
	I0524 19:18:53.548249   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHUsername
	I0524 19:18:53.548382   98716 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/machines/test-preload-262726/id_rsa Username:docker}
	I0524 19:18:53.684389   98716 ssh_runner.go:195] Run: systemctl --version
	I0524 19:18:53.689668   98716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0524 19:18:53.694680   98716 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0524 19:18:53.694730   98716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0524 19:18:53.709284   98716 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0524 19:18:53.709300   98716 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
	I0524 19:18:53.709420   98716 ssh_runner.go:195] Run: sudo crictl images --output json
	I0524 19:18:57.742348   98716 ssh_runner.go:235] Completed: sudo crictl images --output json: (4.03290368s)
	I0524 19:18:57.742472   98716 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0524 19:18:57.742612   98716 ssh_runner.go:195] Run: which lz4
	I0524 19:18:57.746729   98716 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0524 19:18:57.750604   98716 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0524 19:18:57.750627   98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458696921 bytes)
	I0524 19:18:59.372315   98716 containerd.go:547] Took 1.625617 seconds to copy over tarball
	I0524 19:18:59.372389   98716 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0524 19:19:02.385241   98716 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.012822185s)
	I0524 19:19:02.385271   98716 containerd.go:554] Took 3.012929 seconds to extract the tarball
	I0524 19:19:02.385284   98716 ssh_runner.go:146] rm: /preloaded.tar.lz4
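
The sequence above is minikube's preload fast path: stat the tarball on the guest, transfer it only when the stat fails, extract it with lz4, then delete it. A rough local stand-in in Go (the cache path is illustrative, and cp stands in for the scp the ssh runner actually performs):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and folds its output into any error.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s %v: %w: %s", name, args, err, out)
		}
		return nil
	}

	func main() {
		const tarball = "/preloaded.tar.lz4" // target path from the log

		// Existence check: a non-zero exit from stat means we must transfer.
		if err := run("stat", "-c", "%s %y", tarball); err != nil {
			fmt.Println("tarball missing, copying from cache:", err)
			if err := run("cp", "/path/to/cache/preloaded.tar.lz4", tarball); err != nil {
				panic(err)
			}
		}
		// Extract with lz4 decompression into /var, as in the log.
		if err := run("tar", "-I", "lz4", "-C", "/var", "-xf", tarball); err != nil {
			panic(err)
		}
		_ = run("rm", tarball) // best-effort cleanup
	}
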
	I0524 19:19:02.424067   98716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:19:02.529108   98716 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0524 19:19:02.547814   98716 start.go:481] detecting cgroup driver to use...
	I0524 19:19:02.547878   98716 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0524 19:19:05.252817   98716 ssh_runner.go:235] Completed: sudo systemctl stop -f crio: (2.704911709s)
	I0524 19:19:05.252887   98716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 19:19:05.265480   98716 docker.go:193] disabling cri-docker service (if available) ...
	I0524 19:19:05.265529   98716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0524 19:19:05.278930   98716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0524 19:19:05.293499   98716 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0524 19:19:05.395811   98716 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0524 19:19:05.495400   98716 docker.go:209] disabling docker service ...
	I0524 19:19:05.495454   98716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0524 19:19:05.507728   98716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0524 19:19:05.518146   98716 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0524 19:19:05.612172   98716 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0524 19:19:05.716557   98716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0524 19:19:05.727684   98716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 19:19:05.745291   98716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0524 19:19:05.754299   98716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0524 19:19:05.762964   98716 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0524 19:19:05.763010   98716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0524 19:19:05.771759   98716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 19:19:05.780491   98716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0524 19:19:05.789120   98716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 19:19:05.797799   98716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0524 19:19:05.806673   98716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
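
The sed commands above rewrite /etc/containerd/config.toml in place. The same two key edits, forcing SystemdCgroup = false for the cgroupfs driver and pinning the CNI conf_dir, could be sketched in Go like this (error handling kept minimal):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Mirror: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		out := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
			ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		// Mirror: sed -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g'
		out = regexp.MustCompile(`(?m)^(\s*)conf_dir = .*$`).
			ReplaceAll(out, []byte(`${1}conf_dir = "/etc/cni/net.d"`))
		if err := os.WriteFile(path, out, 0644); err != nil {
			panic(err)
		}
	}
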
	I0524 19:19:05.815488   98716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0524 19:19:05.823645   98716 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0524 19:19:05.823687   98716 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0524 19:19:05.835702   98716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0524 19:19:05.843504   98716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:19:05.935697   98716 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0524 19:19:05.966686   98716 start.go:528] Will wait 60s for socket path /run/containerd/containerd.sock
	I0524 19:19:05.966740   98716 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0524 19:19:05.973161   98716 retry.go:31] will retry after 861.302406ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
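
The retry above comes from a poll-with-backoff loop: stat the containerd socket, sleep a jittered interval on failure, and give up after the advertised 60s budget. A minimal sketch of that shape (the exact backoff schedule is an assumption):

	package main

	import (
		"fmt"
		"math/rand"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		backoff := 500 * time.Millisecond
		for {
			if _, err := os.Stat(path); err == nil {
				return nil // socket exists
			} else if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s: %w", path, err)
			}
			// Sleep with a little jitter, then grow the backoff.
			time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff/2))))
			if backoff < 5*time.Second {
				backoff *= 2
			}
		}
	}

	func main() {
		if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
			panic(err)
		}
		fmt.Println("containerd socket is ready")
	}
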
	I0524 19:19:06.835221   98716 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0524 19:19:06.840643   98716 start.go:549] Will wait 60s for crictl version
	I0524 19:19:06.840694   98716 ssh_runner.go:195] Run: which crictl
	I0524 19:19:06.844272   98716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0524 19:19:06.872918   98716 start.go:565] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.1
	RuntimeApiVersion:  v1alpha2
	I0524 19:19:06.872977   98716 ssh_runner.go:195] Run: containerd --version
	I0524 19:19:06.900106   98716 ssh_runner.go:195] Run: containerd --version
	I0524 19:19:06.929162   98716 out.go:177] * Preparing Kubernetes v1.24.4 on containerd 1.7.1 ...
	I0524 19:19:06.930962   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetIP
	I0524 19:19:06.933493   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:19:06.933851   98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
	I0524 19:19:06.933877   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:19:06.934069   98716 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0524 19:19:06.937728   98716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
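
The one-liner above keeps /etc/hosts idempotent: filter out any existing host.minikube.internal mapping, append a fresh one, write to a temp file, and copy it back with sudo. An equivalent sketch in Go (writing the file directly instead of the temp-and-cp dance):

	package main

	import (
		"os"
		"strings"
	)

	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any previous mapping for this name (the grep -v in the log).
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
			panic(err)
		}
	}
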
	I0524 19:19:06.949942   98716 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
	I0524 19:19:06.950018   98716 ssh_runner.go:195] Run: sudo crictl images --output json
	I0524 19:19:06.983631   98716 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0524 19:19:06.983682   98716 ssh_runner.go:195] Run: which lz4
	I0524 19:19:06.987261   98716 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0524 19:19:06.990993   98716 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0524 19:19:06.991015   98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458696921 bytes)
	I0524 19:19:08.698638   98716 containerd.go:547] Took 1.711399 seconds to copy over tarball
	I0524 19:19:08.698705   98716 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0524 19:19:11.658664   98716 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.959924019s)
	I0524 19:19:11.658704   98716 containerd.go:554] Took 2.960039 seconds to extract the tarball
	I0524 19:19:11.658716   98716 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0524 19:19:11.698408   98716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:19:11.799282   98716 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0524 19:19:11.825315   98716 ssh_runner.go:195] Run: sudo crictl images --output json
	I0524 19:19:12.881184   98716 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.05582159s)
	I0524 19:19:12.881318   98716 containerd.go:600] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0524 19:19:12.881334   98716 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0524 19:19:12.881436   98716 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 19:19:12.881473   98716 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0524 19:19:12.881506   98716 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0524 19:19:12.881519   98716 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0524 19:19:12.881561   98716 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0524 19:19:12.881647   98716 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0524 19:19:12.881747   98716 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0524 19:19:12.881761   98716 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0524 19:19:12.882749   98716 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0524 19:19:12.882772   98716 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 19:19:12.882748   98716 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0524 19:19:12.882751   98716 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0524 19:19:12.882827   98716 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0524 19:19:12.882813   98716 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0524 19:19:12.882861   98716 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0524 19:19:12.882886   98716 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0524 19:19:13.025844   98716 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/pause:3.7"
	I0524 19:19:13.045120   98716 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/coredns/coredns:v1.8.6"
	I0524 19:19:13.067797   98716 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5"
	I0524 19:19:13.072020   98716 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.5.3-0"
	I0524 19:19:13.073189   98716 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.24.4"
	I0524 19:19:13.080694   98716 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.24.4"
	I0524 19:19:13.085467   98716 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.24.4"
	I0524 19:19:13.089256   98716 ssh_runner.go:195] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.24.4"
	I0524 19:19:13.603571   98716 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0524 19:19:13.603624   98716 cri.go:217] Removing image: registry.k8s.io/pause:3.7
	I0524 19:19:13.603705   98716 ssh_runner.go:195] Run: which crictl
	I0524 19:19:13.623893   98716 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0524 19:19:13.623938   98716 cri.go:217] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0524 19:19:13.623982   98716 ssh_runner.go:195] Run: which crictl
	I0524 19:19:14.057874   98716 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0524 19:19:14.057921   98716 cri.go:217] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 19:19:14.057970   98716 ssh_runner.go:195] Run: which crictl
	I0524 19:19:14.180242   98716 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/etcd:3.5.3-0": (1.108197523s)
	I0524 19:19:14.180280   98716 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0524 19:19:14.180302   98716 cri.go:217] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0524 19:19:14.180342   98716 ssh_runner.go:195] Run: which crictl
	I0524 19:19:14.180366   98716 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-apiserver:v1.24.4": (1.107150363s)
	I0524 19:19:14.180423   98716 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0524 19:19:14.180459   98716 cri.go:217] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0524 19:19:14.180504   98716 ssh_runner.go:195] Run: which crictl
	I0524 19:19:14.252373   98716 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-controller-manager:v1.24.4": (1.171655245s)
	I0524 19:19:14.252412   98716 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0524 19:19:14.252432   98716 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-proxy:v1.24.4": (1.163156617s)
	I0524 19:19:14.252443   98716 cri.go:217] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0524 19:19:14.252461   98716 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0524 19:19:14.252481   98716 cri.go:217] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0524 19:19:14.252484   98716 ssh_runner.go:195] Run: which crictl
	I0524 19:19:14.252501   98716 ssh_runner.go:235] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep registry.k8s.io/kube-scheduler:v1.24.4": (1.166999274s)
	I0524 19:19:14.252543   98716 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0524 19:19:14.252577   98716 cri.go:217] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0524 19:19:14.252581   98716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0524 19:19:14.252506   98716 ssh_runner.go:195] Run: which crictl
	I0524 19:19:14.252609   98716 ssh_runner.go:195] Run: which crictl
	I0524 19:19:14.252790   98716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0524 19:19:14.252822   98716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 19:19:14.252894   98716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0524 19:19:14.252993   98716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0524 19:19:14.266711   98716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0524 19:19:14.266730   98716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0524 19:19:14.354112   98716 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0524 19:19:14.354180   98716 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0524 19:19:14.354286   98716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0524 19:19:14.410683   98716 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0524 19:19:14.410752   98716 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0524 19:19:14.410807   98716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0524 19:19:14.410835   98716 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0524 19:19:14.410897   98716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0524 19:19:14.410840   98716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0524 19:19:14.410935   98716 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0524 19:19:14.411012   98716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0524 19:19:14.411017   98716 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0524 19:19:14.411060   98716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0524 19:19:14.415045   98716 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0524 19:19:14.415107   98716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0524 19:19:14.429235   98716 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0524 19:19:14.429298   98716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0524 19:19:14.429326   98716 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0524 19:19:14.429342   98716 containerd.go:269] Loading image: /var/lib/minikube/images/pause_3.7
	I0524 19:19:14.429383   98716 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.7
	I0524 19:19:14.429392   98716 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0524 19:19:14.429461   98716 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0524 19:19:14.429513   98716 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0524 19:19:14.429554   98716 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0524 19:19:14.429580   98716 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0524 19:19:14.431574   98716 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0524 19:19:14.434933   98716 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0524 19:19:14.544028   98716 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0524 19:19:14.544073   98716 containerd.go:269] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0524 19:19:14.544130   98716 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0524 19:19:15.117549   98716 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0524 19:19:15.117588   98716 containerd.go:269] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0524 19:19:15.117650   98716 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0524 19:19:15.729357   98716 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0524 19:19:15.729396   98716 containerd.go:269] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0524 19:19:15.729484   98716 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.8.6
	I0524 19:19:16.084092   98716 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0524 19:19:16.084137   98716 containerd.go:269] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0524 19:19:16.084209   98716 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0
	I0524 19:19:17.790841   98716 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.3-0: (1.706603502s)
	I0524 19:19:17.790868   98716 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0524 19:19:17.790919   98716 containerd.go:269] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0524 19:19:17.790973   98716 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0524 19:19:18.415328   98716 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0524 19:19:18.415365   98716 containerd.go:269] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0524 19:19:18.415413   98716 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0524 19:19:18.789128   98716 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0524 19:19:18.789181   98716 containerd.go:269] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0524 19:19:18.789253   98716 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.24.4
	I0524 19:19:19.487694   98716 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/16573-71939/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0524 19:19:19.487743   98716 cache_images.go:123] Successfully loaded all cached images
	I0524 19:19:19.487750   98716 cache_images.go:92] LoadImages completed in 6.606406867s
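
Each "Loading image" / "Transferred and loaded" pair above is one pass of the cache loop: the tarball is copied to /var/lib/minikube/images if missing (here every copy was skipped because the files already existed), then imported into containerd's k8s.io namespace. A condensed sketch of the import step, with a hypothetical image list:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// importImage mirrors: sudo ctr -n=k8s.io images import /var/lib/minikube/images/<tar>
	func importImage(tar string) error {
		out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tar).CombinedOutput()
		if err != nil {
			return fmt.Errorf("import %s: %w: %s", tar, err, out)
		}
		return nil
	}

	func main() {
		images := []string{ // a few of the tarballs seen in the log
			"/var/lib/minikube/images/pause_3.7",
			"/var/lib/minikube/images/etcd_3.5.3-0",
			"/var/lib/minikube/images/kube-apiserver_v1.24.4",
		}
		for _, img := range images {
			if err := importImage(img); err != nil {
				panic(err)
			}
			fmt.Println("loaded", img)
		}
	}
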
	I0524 19:19:19.487809   98716 ssh_runner.go:195] Run: sudo crictl info
	I0524 19:19:19.516845   98716 cni.go:84] Creating CNI manager for ""
	I0524 19:19:19.516864   98716 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0524 19:19:19.516887   98716 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0524 19:19:19.516914   98716 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.12 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-262726 NodeName:test-preload-262726 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.12"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.12 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0524 19:19:19.517090   98716 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.12
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "test-preload-262726"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.12
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.12"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0524 19:19:19.517187   98716 kubeadm.go:971] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-262726 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.12
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-262726 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0524 19:19:19.517255   98716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0524 19:19:19.526645   98716 binaries.go:44] Found k8s binaries, skipping transfer
	I0524 19:19:19.526713   98716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0524 19:19:19.535819   98716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (392 bytes)
	I0524 19:19:19.551040   98716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0524 19:19:19.565363   98716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2111 bytes)
	I0524 19:19:19.580240   98716 ssh_runner.go:195] Run: grep 192.168.39.12	control-plane.minikube.internal$ /etc/hosts
	I0524 19:19:19.583574   98716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.12	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0524 19:19:19.594021   98716 certs.go:56] Setting up /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726 for IP: 192.168.39.12
	I0524 19:19:19.594047   98716 certs.go:190] acquiring lock for shared ca certs: {Name:mk2a3f0918ca1ce5e8a6fdf9e7f174b68f929bde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 19:19:19.594179   98716 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16573-71939/.minikube/ca.key
	I0524 19:19:19.594214   98716 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16573-71939/.minikube/proxy-client-ca.key
	I0524 19:19:19.594325   98716 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/client.key
	I0524 19:19:19.594383   98716 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/apiserver.key.6b589965
	I0524 19:19:19.594429   98716 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/proxy-client.key
	I0524 19:19:19.594540   98716 certs.go:437] found cert: /home/jenkins/minikube-integration/16573-71939/.minikube/certs/home/jenkins/minikube-integration/16573-71939/.minikube/certs/79153.pem (1338 bytes)
	W0524 19:19:19.594566   98716 certs.go:433] ignoring /home/jenkins/minikube-integration/16573-71939/.minikube/certs/home/jenkins/minikube-integration/16573-71939/.minikube/certs/79153_empty.pem, impossibly tiny 0 bytes
	I0524 19:19:19.594577   98716 certs.go:437] found cert: /home/jenkins/minikube-integration/16573-71939/.minikube/certs/home/jenkins/minikube-integration/16573-71939/.minikube/certs/ca-key.pem (1675 bytes)
	I0524 19:19:19.594600   98716 certs.go:437] found cert: /home/jenkins/minikube-integration/16573-71939/.minikube/certs/home/jenkins/minikube-integration/16573-71939/.minikube/certs/ca.pem (1078 bytes)
	I0524 19:19:19.594624   98716 certs.go:437] found cert: /home/jenkins/minikube-integration/16573-71939/.minikube/certs/home/jenkins/minikube-integration/16573-71939/.minikube/certs/cert.pem (1123 bytes)
	I0524 19:19:19.594646   98716 certs.go:437] found cert: /home/jenkins/minikube-integration/16573-71939/.minikube/certs/home/jenkins/minikube-integration/16573-71939/.minikube/certs/key.pem (1679 bytes)
	I0524 19:19:19.594682   98716 certs.go:437] found cert: /home/jenkins/minikube-integration/16573-71939/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16573-71939/.minikube/files/etc/ssl/certs/791532.pem (1708 bytes)
	I0524 19:19:19.595319   98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0524 19:19:19.617452   98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0524 19:19:19.640282   98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0524 19:19:19.662076   98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0524 19:19:19.682763   98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0524 19:19:19.703266   98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0524 19:19:19.724673   98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0524 19:19:19.746052   98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0524 19:19:19.767686   98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/files/etc/ssl/certs/791532.pem --> /usr/share/ca-certificates/791532.pem (1708 bytes)
	I0524 19:19:19.789391   98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0524 19:19:19.810737   98716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16573-71939/.minikube/certs/79153.pem --> /usr/share/ca-certificates/79153.pem (1338 bytes)
	I0524 19:19:19.832180   98716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0524 19:19:19.847961   98716 ssh_runner.go:195] Run: openssl version
	I0524 19:19:19.853621   98716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0524 19:19:19.864345   98716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:19:19.868793   98716 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 18:37 /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:19:19.868834   98716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:19:19.874165   98716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0524 19:19:19.884305   98716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/79153.pem && ln -fs /usr/share/ca-certificates/79153.pem /etc/ssl/certs/79153.pem"
	I0524 19:19:19.894216   98716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/79153.pem
	I0524 19:19:19.898501   98716 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 24 18:42 /usr/share/ca-certificates/79153.pem
	I0524 19:19:19.898546   98716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/79153.pem
	I0524 19:19:19.903884   98716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/79153.pem /etc/ssl/certs/51391683.0"
	I0524 19:19:19.913585   98716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/791532.pem && ln -fs /usr/share/ca-certificates/791532.pem /etc/ssl/certs/791532.pem"
	I0524 19:19:19.923527   98716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/791532.pem
	I0524 19:19:19.927840   98716 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 24 18:42 /usr/share/ca-certificates/791532.pem
	I0524 19:19:19.927879   98716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/791532.pem
	I0524 19:19:19.933202   98716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/791532.pem /etc/ssl/certs/3ec20f2e.0"
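
The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes: "openssl x509 -hash -noout" prints the hash under which TLS libraries look a CA up in /etc/ssl/certs. A small sketch of the hash-then-link step, assuming the openssl binary is on PATH:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCA computes a certificate's subject hash and creates the
	// /etc/ssl/certs/<hash>.0 symlink that OpenSSL lookup expects.
	func linkCA(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // replace any stale link, like ln -fs
		return os.Symlink(pem, link)
	}

	func main() {
		if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			panic(err)
		}
		fmt.Println("CA linked")
	}
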
	I0524 19:19:19.943003   98716 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0524 19:19:19.947239   98716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0524 19:19:19.953025   98716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0524 19:19:19.958105   98716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0524 19:19:19.963277   98716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0524 19:19:19.968700   98716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0524 19:19:19.974073   98716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0524 19:19:19.979227   98716 kubeadm.go:404] StartCluster: {Name:test-preload-262726 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-262726 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 19:19:19.979335   98716 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0524 19:19:19.979403   98716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0524 19:19:20.015236   98716 cri.go:88] found id: ""
	I0524 19:19:20.015296   98716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0524 19:19:20.024701   98716 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0524 19:19:20.024723   98716 kubeadm.go:636] restartCluster start
	I0524 19:19:20.024767   98716 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0524 19:19:20.033565   98716 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0524 19:19:20.034007   98716 kubeconfig.go:135] verify returned: extract IP: "test-preload-262726" does not appear in /home/jenkins/minikube-integration/16573-71939/kubeconfig
	I0524 19:19:20.034139   98716 kubeconfig.go:146] "test-preload-262726" context is missing from /home/jenkins/minikube-integration/16573-71939/kubeconfig - will repair!
	I0524 19:19:20.034447   98716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16573-71939/kubeconfig: {Name:mkca58267e892de3526cb65d43d387d65171cc36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 19:19:20.035132   98716 kapi.go:59] client config for test-preload-262726: &rest.Config{Host:"https://192.168.39.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/client.crt", KeyFile:"/home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/client.key", CAFile:"/home/jenkins/minikube-integration/16573-71939/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19b9380), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0524 19:19:20.035999   98716 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0524 19:19:20.044793   98716 api_server.go:166] Checking apiserver status ...
	I0524 19:19:20.044834   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 19:19:20.055664   98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 19:19:20.556353   98716 api_server.go:166] Checking apiserver status ...
	I0524 19:19:20.556419   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 19:19:20.568045   98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 19:19:21.056681   98716 api_server.go:166] Checking apiserver status ...
	I0524 19:19:21.056772   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 19:19:21.069278   98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 19:19:21.555853   98716 api_server.go:166] Checking apiserver status ...
	I0524 19:19:21.555970   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 19:19:21.568752   98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 19:19:22.056331   98716 api_server.go:166] Checking apiserver status ...
	I0524 19:19:22.056406   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 19:19:22.068085   98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 19:19:22.556756   98716 api_server.go:166] Checking apiserver status ...
	I0524 19:19:22.556817   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 19:19:22.568129   98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 19:19:23.056052   98716 api_server.go:166] Checking apiserver status ...
	I0524 19:19:23.056114   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 19:19:23.067406   98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 19:19:23.556057   98716 api_server.go:166] Checking apiserver status ...
	I0524 19:19:23.556157   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 19:19:23.567724   98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 19:19:24.056523   98716 api_server.go:166] Checking apiserver status ...
	I0524 19:19:24.056611   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 19:19:24.068114   98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 19:19:24.555759   98716 api_server.go:166] Checking apiserver status ...
	I0524 19:19:24.555836   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 19:19:24.567084   98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 19:19:25.056797   98716 api_server.go:166] Checking apiserver status ...
	I0524 19:19:25.056894   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 19:19:25.068735   98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 19:19:25.556710   98716 api_server.go:166] Checking apiserver status ...
	I0524 19:19:25.556797   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 19:19:25.568294   98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 19:19:26.055865   98716 api_server.go:166] Checking apiserver status ...
	I0524 19:19:26.055955   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 19:19:26.067763   98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 19:19:26.556312   98716 api_server.go:166] Checking apiserver status ...
	I0524 19:19:26.556383   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 19:19:26.567798   98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 19:19:27.056395   98716 api_server.go:166] Checking apiserver status ...
	I0524 19:19:27.056488   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 19:19:27.068396   98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 19:19:27.555905   98716 api_server.go:166] Checking apiserver status ...
	I0524 19:19:27.556029   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 19:19:27.567809   98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 19:19:28.056559   98716 api_server.go:166] Checking apiserver status ...
	I0524 19:19:28.056631   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 19:19:28.068009   98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 19:19:28.556670   98716 api_server.go:166] Checking apiserver status ...
	I0524 19:19:28.556756   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 19:19:28.568312   98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 19:19:29.055927   98716 api_server.go:166] Checking apiserver status ...
	I0524 19:19:29.056100   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 19:19:29.067625   98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 19:19:29.556365   98716 api_server.go:166] Checking apiserver status ...
	I0524 19:19:29.556441   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 19:19:29.568980   98716 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
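The block above is minikube's apiserver wait loop: api_server.go re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*` over SSH roughly every 500ms (pgrep exits 1 while no process matches) until the surrounding context times out, which is the "context deadline exceeded" on the next line. A minimal Go sketch of the same poll, assuming a local shell in place of minikube's ssh_runner; the pgrep invocation is taken from the log, everything else is illustrative:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerPID re-runs pgrep until it finds the apiserver or the
// context expires, mirroring the ~500ms retry cadence in the log above.
func waitForAPIServerPID(ctx context.Context) (string, error) {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("unable to get apiserver pid: %w", ctx.Err())
		case <-ticker.C:
			// pgrep exits with status 1 when nothing matches; try again.
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	pid, err := waitForAPIServerPID(ctx)
	if err != nil {
		fmt.Println("needs reconfigure:", err)
		return
	}
	fmt.Println("apiserver pid:", pid)
}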
	I0524 19:19:30.045076   98716 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0524 19:19:30.045107   98716 kubeadm.go:1123] stopping kube-system containers ...
	I0524 19:19:30.045121   98716 cri.go:53] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0524 19:19:30.045201   98716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0524 19:19:30.074736   98716 cri.go:88] found id: ""
	I0524 19:19:30.074798   98716 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0524 19:19:30.090262   98716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0524 19:19:30.099367   98716 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0524 19:19:30.099414   98716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0524 19:19:30.108182   98716 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0524 19:19:30.108211   98716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 19:19:30.204505   98716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 19:19:30.687335   98716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0524 19:19:31.030674   98716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 19:19:31.093028   98716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
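The five ssh_runner calls above replay the `kubeadm init phase` sequence (certs, kubeconfig, kubelet-start, control-plane, etcd) against the kubeadm.yaml that was just copied into place. A sketch of the same sequence as a standalone Go program, using the binary path and config path shown in the log; error handling is simplified relative to what minikube does:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Phases in the order minikube runs them during restartCluster.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("phase %q failed: %v\n%s", p, err, out)
			return
		}
	}
	fmt.Println("all kubeadm init phases completed")
}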
	I0524 19:19:31.171001   98716 api_server.go:52] waiting for apiserver process to appear ...
	I0524 19:19:31.171092   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 19:19:31.685197   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 19:19:32.185226   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 19:19:32.685385   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 19:19:33.185475   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 19:19:33.217487   98716 api_server.go:72] duration metric: took 2.046486567s to wait for apiserver process to appear ...
	I0524 19:19:33.217511   98716 api_server.go:88] waiting for apiserver healthz status ...
	I0524 19:19:33.217530   98716 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
	I0524 19:19:33.217946   98716 api_server.go:269] stopped: https://192.168.39.12:8443/healthz: Get "https://192.168.39.12:8443/healthz": dial tcp 192.168.39.12:8443: connect: connection refused
	I0524 19:19:33.718589   98716 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
	I0524 19:19:33.719145   98716 api_server.go:269] stopped: https://192.168.39.12:8443/healthz: Get "https://192.168.39.12:8443/healthz": dial tcp 192.168.39.12:8443: connect: connection refused
	I0524 19:19:34.218496   98716 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
	I0524 19:19:37.184754   98716 api_server.go:279] https://192.168.39.12:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0524 19:19:37.184788   98716 api_server.go:103] status: https://192.168.39.12:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0524 19:19:37.184803   98716 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
	I0524 19:19:37.217459   98716 api_server.go:279] https://192.168.39.12:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0524 19:19:37.217501   98716 api_server.go:103] status: https://192.168.39.12:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0524 19:19:37.218534   98716 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
	I0524 19:19:37.247324   98716 api_server.go:279] https://192.168.39.12:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0524 19:19:37.247351   98716 api_server.go:103] status: https://192.168.39.12:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0524 19:19:37.719107   98716 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
	I0524 19:19:37.724328   98716 api_server.go:279] https://192.168.39.12:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0524 19:19:37.724358   98716 api_server.go:103] status: https://192.168.39.12:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0524 19:19:38.219071   98716 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
	I0524 19:19:38.230626   98716 api_server.go:279] https://192.168.39.12:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0524 19:19:38.230657   98716 api_server.go:103] status: https://192.168.39.12:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0524 19:19:38.718183   98716 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
	I0524 19:19:38.725084   98716 api_server.go:279] https://192.168.39.12:8443/healthz returned 200:
	ok
	I0524 19:19:38.733586   98716 api_server.go:141] control plane version: v1.24.4
	I0524 19:19:38.733607   98716 api_server.go:131] duration metric: took 5.516088251s to wait for apiserver health ...
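The healthz progression above is typical of a restarting control plane: the unauthenticated probe gets 403 until the bootstrap RBAC policy that lets anonymous users read /healthz exists, /healthz then returns 500 while poststarthooks such as rbac/bootstrap-roles are still pending (the "[-]" entries), and finally 200 "ok" once every check passes. A minimal Go poller in the same spirit, using the endpoint from this run; a real client would present minikube's client certificate pair rather than skipping TLS verification:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Illustrative only: minikube authenticates with its client cert pair;
		// skipping verification just keeps this sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.12:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // body is "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}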
	I0524 19:19:38.733619   98716 cni.go:84] Creating CNI manager for ""
	I0524 19:19:38.733626   98716 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0524 19:19:38.735429   98716 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0524 19:19:38.736884   98716 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0524 19:19:38.750275   98716 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
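The 457-byte file scp'd here is minikube's bridge CNI config for the kvm2 + containerd combination. A sketch of writing a comparable conflist; the JSON below is an illustrative bridge configuration with assumed values, not the exact file from this run:

package main

import (
	"fmt"
	"os"
)

// An illustrative bridge CNI conflist; subnet and plugin options are
// assumptions, not copied from minikube's actual 1-k8s.conflist.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println(err)
	}
}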
	I0524 19:19:38.792895   98716 system_pods.go:43] waiting for kube-system pods to appear ...
	I0524 19:19:38.803215   98716 system_pods.go:59] 7 kube-system pods found
	I0524 19:19:38.803257   98716 system_pods.go:61] "coredns-6d4b75cb6d-dl4xp" [e6216ab8-7bef-475e-a7bb-fab7bf2404f3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0524 19:19:38.803273   98716 system_pods.go:61] "etcd-test-preload-262726" [6abf0ad3-e222-42ce-a289-5ec1c18e5a8e] Running
	I0524 19:19:38.803282   98716 system_pods.go:61] "kube-apiserver-test-preload-262726" [8c7daa0e-36a3-4b7e-b034-888b3e0173b8] Running
	I0524 19:19:38.803291   98716 system_pods.go:61] "kube-controller-manager-test-preload-262726" [de8cc020-34e6-42cd-8770-fdd4c1f5504d] Running
	I0524 19:19:38.803303   98716 system_pods.go:61] "kube-proxy-fdclm" [f135ae50-d2c1-4d46-9045-52bf968f5291] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0524 19:19:38.803318   98716 system_pods.go:61] "kube-scheduler-test-preload-262726" [5e394445-b631-46f2-9739-16bbd1d958ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0524 19:19:38.803331   98716 system_pods.go:61] "storage-provisioner" [19459c64-6c74-4bee-89aa-4db2436a469f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0524 19:19:38.803340   98716 system_pods.go:74] duration metric: took 10.424818ms to wait for pod list to return data ...
	I0524 19:19:38.803353   98716 node_conditions.go:102] verifying NodePressure condition ...
	I0524 19:19:38.807401   98716 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0524 19:19:38.807431   98716 node_conditions.go:123] node cpu capacity is 2
	I0524 19:19:38.807441   98716 node_conditions.go:105] duration metric: took 4.080321ms to run NodePressure ...
	I0524 19:19:38.807457   98716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 19:19:39.036398   98716 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0524 19:19:39.044153   98716 kubeadm.go:787] kubelet initialised
	I0524 19:19:39.044183   98716 kubeadm.go:788] duration metric: took 7.757444ms waiting for restarted kubelet to initialise ...
	I0524 19:19:39.044194   98716 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 19:19:39.050375   98716 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-dl4xp" in "kube-system" namespace to be "Ready" ...
	I0524 19:19:39.056854   98716 pod_ready.go:97] node "test-preload-262726" hosting pod "coredns-6d4b75cb6d-dl4xp" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-262726" has status "Ready":"False"
	I0524 19:19:39.056883   98716 pod_ready.go:81] duration metric: took 6.483427ms waiting for pod "coredns-6d4b75cb6d-dl4xp" in "kube-system" namespace to be "Ready" ...
	E0524 19:19:39.056900   98716 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-262726" hosting pod "coredns-6d4b75cb6d-dl4xp" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-262726" has status "Ready":"False"
	I0524 19:19:39.056917   98716 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-262726" in "kube-system" namespace to be "Ready" ...
	I0524 19:19:39.061342   98716 pod_ready.go:97] node "test-preload-262726" hosting pod "etcd-test-preload-262726" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-262726" has status "Ready":"False"
	I0524 19:19:39.061382   98716 pod_ready.go:81] duration metric: took 4.454845ms waiting for pod "etcd-test-preload-262726" in "kube-system" namespace to be "Ready" ...
	E0524 19:19:39.061391   98716 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-262726" hosting pod "etcd-test-preload-262726" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-262726" has status "Ready":"False"
	I0524 19:19:39.061401   98716 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-262726" in "kube-system" namespace to be "Ready" ...
	I0524 19:19:39.065864   98716 pod_ready.go:97] node "test-preload-262726" hosting pod "kube-apiserver-test-preload-262726" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-262726" has status "Ready":"False"
	I0524 19:19:39.065893   98716 pod_ready.go:81] duration metric: took 4.48328ms waiting for pod "kube-apiserver-test-preload-262726" in "kube-system" namespace to be "Ready" ...
	E0524 19:19:39.065903   98716 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-262726" hosting pod "kube-apiserver-test-preload-262726" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-262726" has status "Ready":"False"
	I0524 19:19:39.065912   98716 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-262726" in "kube-system" namespace to be "Ready" ...
	I0524 19:19:39.198922   98716 pod_ready.go:97] node "test-preload-262726" hosting pod "kube-controller-manager-test-preload-262726" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-262726" has status "Ready":"False"
	I0524 19:19:39.198954   98716 pod_ready.go:81] duration metric: took 133.026489ms waiting for pod "kube-controller-manager-test-preload-262726" in "kube-system" namespace to be "Ready" ...
	E0524 19:19:39.198966   98716 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-262726" hosting pod "kube-controller-manager-test-preload-262726" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-262726" has status "Ready":"False"
	I0524 19:19:39.198978   98716 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fdclm" in "kube-system" namespace to be "Ready" ...
	I0524 19:19:39.595976   98716 pod_ready.go:97] node "test-preload-262726" hosting pod "kube-proxy-fdclm" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-262726" has status "Ready":"False"
	I0524 19:19:39.596010   98716 pod_ready.go:81] duration metric: took 397.023508ms waiting for pod "kube-proxy-fdclm" in "kube-system" namespace to be "Ready" ...
	E0524 19:19:39.596036   98716 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-262726" hosting pod "kube-proxy-fdclm" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-262726" has status "Ready":"False"
	I0524 19:19:39.596050   98716 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-262726" in "kube-system" namespace to be "Ready" ...
	I0524 19:19:39.997580   98716 pod_ready.go:97] node "test-preload-262726" hosting pod "kube-scheduler-test-preload-262726" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-262726" has status "Ready":"False"
	I0524 19:19:39.997637   98716 pod_ready.go:81] duration metric: took 401.571361ms waiting for pod "kube-scheduler-test-preload-262726" in "kube-system" namespace to be "Ready" ...
	E0524 19:19:39.997650   98716 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-262726" hosting pod "kube-scheduler-test-preload-262726" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-262726" has status "Ready":"False"
	I0524 19:19:39.997662   98716 pod_ready.go:38] duration metric: took 953.453981ms for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 19:19:39.997700   98716 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0524 19:19:40.010093   98716 ops.go:34] apiserver oom_adj: -16
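The oom_adj check above confirms the apiserver is protected from the kernel OOM killer: a negative value like -16 tells the kernel to strongly prefer reclaiming other processes first (oom_adj is the legacy interface; modern kernels expose the same bias as oom_score_adj). A small sketch of the same read, with the pid hard-coded as a placeholder instead of looked up via pgrep:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const pid = 1234 // placeholder; in the log this comes from pgrep kube-apiserver
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("oom_adj:", strings.TrimSpace(string(data)))
}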
	I0524 19:19:40.010116   98716 kubeadm.go:640] restartCluster took 19.985385643s
	I0524 19:19:40.010126   98716 kubeadm.go:406] StartCluster complete in 20.030905076s
	I0524 19:19:40.010146   98716 settings.go:142] acquiring lock: {Name:mk242f143a6a02c2ddba85b6f580593271dad784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 19:19:40.010229   98716 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16573-71939/kubeconfig
	I0524 19:19:40.010836   98716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16573-71939/kubeconfig: {Name:mkca58267e892de3526cb65d43d387d65171cc36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 19:19:40.011099   98716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0524 19:19:40.011229   98716 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0524 19:19:40.011304   98716 addons.go:66] Setting storage-provisioner=true in profile "test-preload-262726"
	I0524 19:19:40.011321   98716 addons.go:66] Setting default-storageclass=true in profile "test-preload-262726"
	I0524 19:19:40.011348   98716 addons.go:228] Setting addon storage-provisioner=true in "test-preload-262726"
	W0524 19:19:40.011362   98716 addons.go:237] addon storage-provisioner should already be in state true
	I0524 19:19:40.011369   98716 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-262726"
	I0524 19:19:40.011413   98716 host.go:66] Checking if "test-preload-262726" exists ...
	I0524 19:19:40.011331   98716 config.go:182] Loaded profile config "test-preload-262726": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I0524 19:19:40.011620   98716 kapi.go:59] client config for test-preload-262726: &rest.Config{Host:"https://192.168.39.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/client.crt", KeyFile:"/home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/client.key", CAFile:"/home/jenkins/minikube-integration/16573-71939/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19b9380), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0524 19:19:40.011780   98716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0524 19:19:40.011804   98716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0524 19:19:40.011827   98716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0524 19:19:40.011840   98716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0524 19:19:40.016367   98716 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-262726" context rescaled to 1 replicas
	I0524 19:19:40.016401   98716 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.12 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0524 19:19:40.018817   98716 out.go:177] * Verifying Kubernetes components...
	I0524 19:19:40.020610   98716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 19:19:40.026396   98716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38619
	I0524 19:19:40.026473   98716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41089
	I0524 19:19:40.026796   98716 main.go:141] libmachine: () Calling .GetVersion
	I0524 19:19:40.027052   98716 main.go:141] libmachine: () Calling .GetVersion
	I0524 19:19:40.027353   98716 main.go:141] libmachine: Using API Version  1
	I0524 19:19:40.027379   98716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0524 19:19:40.027532   98716 main.go:141] libmachine: Using API Version  1
	I0524 19:19:40.027564   98716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0524 19:19:40.027673   98716 main.go:141] libmachine: () Calling .GetMachineName
	I0524 19:19:40.027877   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetState
	I0524 19:19:40.027913   98716 main.go:141] libmachine: () Calling .GetMachineName
	I0524 19:19:40.028508   98716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0524 19:19:40.028578   98716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0524 19:19:40.030482   98716 kapi.go:59] client config for test-preload-262726: &rest.Config{Host:"https://192.168.39.12:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/client.crt", KeyFile:"/home/jenkins/minikube-integration/16573-71939/.minikube/profiles/test-preload-262726/client.key", CAFile:"/home/jenkins/minikube-integration/16573-71939/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x19b9380), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0524 19:19:40.039596   98716 addons.go:228] Setting addon default-storageclass=true in "test-preload-262726"
	W0524 19:19:40.039619   98716 addons.go:237] addon default-storageclass should already be in state true
	I0524 19:19:40.039643   98716 host.go:66] Checking if "test-preload-262726" exists ...
	I0524 19:19:40.040035   98716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0524 19:19:40.040090   98716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0524 19:19:40.043815   98716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41789
	I0524 19:19:40.044224   98716 main.go:141] libmachine: () Calling .GetVersion
	I0524 19:19:40.044748   98716 main.go:141] libmachine: Using API Version  1
	I0524 19:19:40.044783   98716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0524 19:19:40.045100   98716 main.go:141] libmachine: () Calling .GetMachineName
	I0524 19:19:40.045311   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetState
	I0524 19:19:40.047036   98716 main.go:141] libmachine: (test-preload-262726) Calling .DriverName
	I0524 19:19:40.049308   98716 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 19:19:40.051080   98716 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0524 19:19:40.051100   98716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0524 19:19:40.051118   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHHostname
	I0524 19:19:40.054545   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:19:40.055011   98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
	I0524 19:19:40.055051   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:19:40.055239   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHPort
	I0524 19:19:40.055423   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHKeyPath
	I0524 19:19:40.055630   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHUsername
	I0524 19:19:40.055674   98716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44987
	I0524 19:19:40.055789   98716 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/machines/test-preload-262726/id_rsa Username:docker}
	I0524 19:19:40.056033   98716 main.go:141] libmachine: () Calling .GetVersion
	I0524 19:19:40.056480   98716 main.go:141] libmachine: Using API Version  1
	I0524 19:19:40.056501   98716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0524 19:19:40.056825   98716 main.go:141] libmachine: () Calling .GetMachineName
	I0524 19:19:40.057255   98716 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0524 19:19:40.057293   98716 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0524 19:19:40.071690   98716 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
	I0524 19:19:40.072020   98716 main.go:141] libmachine: () Calling .GetVersion
	I0524 19:19:40.072458   98716 main.go:141] libmachine: Using API Version  1
	I0524 19:19:40.072481   98716 main.go:141] libmachine: () Calling .SetConfigRaw
	I0524 19:19:40.072807   98716 main.go:141] libmachine: () Calling .GetMachineName
	I0524 19:19:40.073010   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetState
	I0524 19:19:40.074702   98716 main.go:141] libmachine: (test-preload-262726) Calling .DriverName
	I0524 19:19:40.074949   98716 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0524 19:19:40.074965   98716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0524 19:19:40.074987   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHHostname
	I0524 19:19:40.078268   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:19:40.078298   98716 main.go:141] libmachine: (test-preload-262726) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:ce:d5", ip: ""} in network mk-test-preload-262726: {Iface:virbr1 ExpiryTime:2023-05-24 20:18:47 +0000 UTC Type:0 Mac:52:54:00:2c:ce:d5 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:test-preload-262726 Clientid:01:52:54:00:2c:ce:d5}
	I0524 19:19:40.078329   98716 main.go:141] libmachine: (test-preload-262726) DBG | domain test-preload-262726 has defined IP address 192.168.39.12 and MAC address 52:54:00:2c:ce:d5 in network mk-test-preload-262726
	I0524 19:19:40.078510   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHPort
	I0524 19:19:40.078692   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHKeyPath
	I0524 19:19:40.078864   98716 main.go:141] libmachine: (test-preload-262726) Calling .GetSSHUsername
	I0524 19:19:40.079097   98716 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/machines/test-preload-262726/id_rsa Username:docker}
	I0524 19:19:40.186328   98716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0524 19:19:40.212428   98716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0524 19:19:40.214579   98716 node_ready.go:35] waiting up to 6m0s for node "test-preload-262726" to be "Ready" ...
	I0524 19:19:40.214707   98716 start.go:889] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0524 19:19:41.073998   98716 main.go:141] libmachine: Making call to close driver server
	I0524 19:19:41.074026   98716 main.go:141] libmachine: (test-preload-262726) Calling .Close
	I0524 19:19:41.074153   98716 main.go:141] libmachine: Making call to close driver server
	I0524 19:19:41.074183   98716 main.go:141] libmachine: (test-preload-262726) Calling .Close
	I0524 19:19:41.074327   98716 main.go:141] libmachine: (test-preload-262726) DBG | Closing plugin on server side
	I0524 19:19:41.074399   98716 main.go:141] libmachine: Successfully made call to close driver server
	I0524 19:19:41.074418   98716 main.go:141] libmachine: Making call to close connection to plugin binary
	I0524 19:19:41.074433   98716 main.go:141] libmachine: Making call to close driver server
	I0524 19:19:41.074446   98716 main.go:141] libmachine: (test-preload-262726) Calling .Close
	I0524 19:19:41.074474   98716 main.go:141] libmachine: (test-preload-262726) DBG | Closing plugin on server side
	I0524 19:19:41.074484   98716 main.go:141] libmachine: Successfully made call to close driver server
	I0524 19:19:41.074502   98716 main.go:141] libmachine: Making call to close connection to plugin binary
	I0524 19:19:41.074525   98716 main.go:141] libmachine: Making call to close driver server
	I0524 19:19:41.074541   98716 main.go:141] libmachine: (test-preload-262726) Calling .Close
	I0524 19:19:41.074672   98716 main.go:141] libmachine: Successfully made call to close driver server
	I0524 19:19:41.074689   98716 main.go:141] libmachine: Making call to close connection to plugin binary
	I0524 19:19:41.074702   98716 main.go:141] libmachine: Making call to close driver server
	I0524 19:19:41.074711   98716 main.go:141] libmachine: (test-preload-262726) Calling .Close
	I0524 19:19:41.074810   98716 main.go:141] libmachine: (test-preload-262726) DBG | Closing plugin on server side
	I0524 19:19:41.074842   98716 main.go:141] libmachine: Successfully made call to close driver server
	I0524 19:19:41.074852   98716 main.go:141] libmachine: Making call to close connection to plugin binary
	I0524 19:19:41.074904   98716 main.go:141] libmachine: (test-preload-262726) DBG | Closing plugin on server side
	I0524 19:19:41.074943   98716 main.go:141] libmachine: Successfully made call to close driver server
	I0524 19:19:41.074963   98716 main.go:141] libmachine: Making call to close connection to plugin binary
	I0524 19:19:41.077473   98716 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0524 19:19:41.079154   98716 addons.go:499] enable addons completed in 1.067932333s: enabled=[storage-provisioner default-storageclass]
	I0524 19:19:42.220145   98716 node_ready.go:58] node "test-preload-262726" has status "Ready":"False"
	I0524 19:19:44.220895   98716 node_ready.go:58] node "test-preload-262726" has status "Ready":"False"
	I0524 19:19:46.222287   98716 node_ready.go:58] node "test-preload-262726" has status "Ready":"False"
	I0524 19:19:47.721822   98716 node_ready.go:49] node "test-preload-262726" has status "Ready":"True"
	I0524 19:19:47.721844   98716 node_ready.go:38] duration metric: took 7.507239286s waiting for node "test-preload-262726" to be "Ready" ...
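node_ready.go polls the node object until its Ready condition flips to True, which here took about 7.5s after the kubelet restart. A hedged client-go sketch of the same wait, assuming a kubeconfig at the default location and using the node name from this run (requires k8s.io/client-go in go.mod):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait above
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "test-preload-262726", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node Ready")
}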
	I0524 19:19:47.721852   98716 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 19:19:47.729253   98716 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-dl4xp" in "kube-system" namespace to be "Ready" ...
	I0524 19:19:47.734682   98716 pod_ready.go:92] pod "coredns-6d4b75cb6d-dl4xp" in "kube-system" namespace has status "Ready":"True"
	I0524 19:19:47.734700   98716 pod_ready.go:81] duration metric: took 5.425367ms waiting for pod "coredns-6d4b75cb6d-dl4xp" in "kube-system" namespace to be "Ready" ...
	I0524 19:19:47.734707   98716 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-262726" in "kube-system" namespace to be "Ready" ...
	I0524 19:19:48.744478   98716 pod_ready.go:92] pod "etcd-test-preload-262726" in "kube-system" namespace has status "Ready":"True"
	I0524 19:19:48.744504   98716 pod_ready.go:81] duration metric: took 1.009789802s waiting for pod "etcd-test-preload-262726" in "kube-system" namespace to be "Ready" ...
	I0524 19:19:48.744514   98716 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-262726" in "kube-system" namespace to be "Ready" ...
	I0524 19:19:48.748898   98716 pod_ready.go:92] pod "kube-apiserver-test-preload-262726" in "kube-system" namespace has status "Ready":"True"
	I0524 19:19:48.748915   98716 pod_ready.go:81] duration metric: took 4.39372ms waiting for pod "kube-apiserver-test-preload-262726" in "kube-system" namespace to be "Ready" ...
	I0524 19:19:48.748927   98716 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-262726" in "kube-system" namespace to be "Ready" ...
	I0524 19:19:49.264578   98716 pod_ready.go:92] pod "kube-controller-manager-test-preload-262726" in "kube-system" namespace has status "Ready":"True"
	I0524 19:19:49.264614   98716 pod_ready.go:81] duration metric: took 515.67186ms waiting for pod "kube-controller-manager-test-preload-262726" in "kube-system" namespace to be "Ready" ...
	I0524 19:19:49.264628   98716 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fdclm" in "kube-system" namespace to be "Ready" ...
	I0524 19:19:49.321513   98716 pod_ready.go:92] pod "kube-proxy-fdclm" in "kube-system" namespace has status "Ready":"True"
	I0524 19:19:49.321530   98716 pod_ready.go:81] duration metric: took 56.896398ms waiting for pod "kube-proxy-fdclm" in "kube-system" namespace to be "Ready" ...
	I0524 19:19:49.321538   98716 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-262726" in "kube-system" namespace to be "Ready" ...
	I0524 19:19:51.728864   98716 pod_ready.go:102] pod "kube-scheduler-test-preload-262726" in "kube-system" namespace has status "Ready":"False"
	I0524 19:19:52.726450   98716 pod_ready.go:92] pod "kube-scheduler-test-preload-262726" in "kube-system" namespace has status "Ready":"True"
	I0524 19:19:52.726474   98716 pod_ready.go:81] duration metric: took 3.404929577s waiting for pod "kube-scheduler-test-preload-262726" in "kube-system" namespace to be "Ready" ...
	I0524 19:19:52.726482   98716 pod_ready.go:38] duration metric: took 5.004621803s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 19:19:52.726498   98716 api_server.go:52] waiting for apiserver process to appear ...
	I0524 19:19:52.726556   98716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 19:19:52.739958   98716 api_server.go:72] duration metric: took 12.723528271s to wait for apiserver process to appear ...
	I0524 19:19:52.739977   98716 api_server.go:88] waiting for apiserver healthz status ...
	I0524 19:19:52.739992   98716 api_server.go:253] Checking apiserver healthz at https://192.168.39.12:8443/healthz ...
	I0524 19:19:52.744881   98716 api_server.go:279] https://192.168.39.12:8443/healthz returned 200:
	ok
	I0524 19:19:52.745704   98716 api_server.go:141] control plane version: v1.24.4
	I0524 19:19:52.745724   98716 api_server.go:131] duration metric: took 5.740658ms to wait for apiserver health ...
	I0524 19:19:52.745731   98716 system_pods.go:43] waiting for kube-system pods to appear ...
	I0524 19:19:52.751422   98716 system_pods.go:59] 7 kube-system pods found
	I0524 19:19:52.751441   98716 system_pods.go:61] "coredns-6d4b75cb6d-dl4xp" [e6216ab8-7bef-475e-a7bb-fab7bf2404f3] Running
	I0524 19:19:52.751446   98716 system_pods.go:61] "etcd-test-preload-262726" [6abf0ad3-e222-42ce-a289-5ec1c18e5a8e] Running
	I0524 19:19:52.751450   98716 system_pods.go:61] "kube-apiserver-test-preload-262726" [8c7daa0e-36a3-4b7e-b034-888b3e0173b8] Running
	I0524 19:19:52.751454   98716 system_pods.go:61] "kube-controller-manager-test-preload-262726" [de8cc020-34e6-42cd-8770-fdd4c1f5504d] Running
	I0524 19:19:52.751458   98716 system_pods.go:61] "kube-proxy-fdclm" [f135ae50-d2c1-4d46-9045-52bf968f5291] Running
	I0524 19:19:52.751464   98716 system_pods.go:61] "kube-scheduler-test-preload-262726" [5e394445-b631-46f2-9739-16bbd1d958ac] Running
	I0524 19:19:52.751468   98716 system_pods.go:61] "storage-provisioner" [19459c64-6c74-4bee-89aa-4db2436a469f] Running
	I0524 19:19:52.751473   98716 system_pods.go:74] duration metric: took 5.737408ms to wait for pod list to return data ...
	I0524 19:19:52.751479   98716 default_sa.go:34] waiting for default service account to be created ...
	I0524 19:19:52.753185   98716 default_sa.go:45] found service account: "default"
	I0524 19:19:52.753202   98716 default_sa.go:55] duration metric: took 1.716393ms for default service account to be created ...
	I0524 19:19:52.753210   98716 system_pods.go:116] waiting for k8s-apps to be running ...
	I0524 19:19:52.757409   98716 system_pods.go:86] 7 kube-system pods found
	I0524 19:19:52.757428   98716 system_pods.go:89] "coredns-6d4b75cb6d-dl4xp" [e6216ab8-7bef-475e-a7bb-fab7bf2404f3] Running
	I0524 19:19:52.757436   98716 system_pods.go:89] "etcd-test-preload-262726" [6abf0ad3-e222-42ce-a289-5ec1c18e5a8e] Running
	I0524 19:19:52.757442   98716 system_pods.go:89] "kube-apiserver-test-preload-262726" [8c7daa0e-36a3-4b7e-b034-888b3e0173b8] Running
	I0524 19:19:52.757448   98716 system_pods.go:89] "kube-controller-manager-test-preload-262726" [de8cc020-34e6-42cd-8770-fdd4c1f5504d] Running
	I0524 19:19:52.757453   98716 system_pods.go:89] "kube-proxy-fdclm" [f135ae50-d2c1-4d46-9045-52bf968f5291] Running
	I0524 19:19:52.757460   98716 system_pods.go:89] "kube-scheduler-test-preload-262726" [5e394445-b631-46f2-9739-16bbd1d958ac] Running
	I0524 19:19:52.757465   98716 system_pods.go:89] "storage-provisioner" [19459c64-6c74-4bee-89aa-4db2436a469f] Running
	I0524 19:19:52.757472   98716 system_pods.go:126] duration metric: took 4.257212ms to wait for k8s-apps to be running ...
	I0524 19:19:52.757482   98716 system_svc.go:44] waiting for kubelet service to be running ....
	I0524 19:19:52.757522   98716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 19:19:52.772046   98716 system_svc.go:56] duration metric: took 14.56114ms WaitForService to wait for kubelet.
	I0524 19:19:52.772062   98716 kubeadm.go:581] duration metric: took 12.75563501s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0524 19:19:52.772094   98716 node_conditions.go:102] verifying NodePressure condition ...
	I0524 19:19:52.921919   98716 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0524 19:19:52.921945   98716 node_conditions.go:123] node cpu capacity is 2
	I0524 19:19:52.921954   98716 node_conditions.go:105] duration metric: took 149.854657ms to run NodePressure ...
	I0524 19:19:52.921964   98716 start.go:228] waiting for startup goroutines ...
	I0524 19:19:52.921970   98716 start.go:233] waiting for cluster config update ...
	I0524 19:19:52.921979   98716 start.go:242] writing updated cluster config ...
	I0524 19:19:52.922225   98716 ssh_runner.go:195] Run: rm -f paused
	I0524 19:19:52.970018   98716 start.go:568] kubectl: 1.27.2, cluster: 1.24.4 (minor skew: 3)
	I0524 19:19:52.972074   98716 out.go:177] 
	W0524 19:19:52.973715   98716 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.24.4.
	I0524 19:19:52.975183   98716 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0524 19:19:52.976679   98716 out.go:177] * Done! kubectl is now configured to use "test-preload-262726" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	eb128cc2aa9b8       a4ca41631cc7a       7 seconds ago       Running             coredns                   1                   d61c04fc36dad
	523278fedeee7       6e38f40d628db       14 seconds ago      Running             storage-provisioner       1                   23220399aa9fd
	2d99e8fb77a0e       7a53d1e08ef58       14 seconds ago      Running             kube-proxy                1                   4e38a9d2fc98a
	29eed419b3c13       03fa22539fc1c       20 seconds ago      Running             kube-scheduler            1                   53b6a904ae609
	74fd7b94b3d16       aebe758cef4cd       21 seconds ago      Running             etcd                      1                   87a1cf416a684
	6775fb5cd8bfe       1f99cb6da9a82       21 seconds ago      Running             kube-controller-manager   1                   8875b1315c18d
	eaac0ac39a864       6cab9d1bed1be       21 seconds ago      Running             kube-apiserver            1                   8abb6ba3c904f
	
	* 
	* ==> containerd <==
	* -- Journal begins at Wed 2023-05-24 19:18:46 UTC, ends at Wed 2023-05-24 19:19:53 UTC. --
	May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.072738227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fdclm,Uid:f135ae50-d2c1-4d46-9045-52bf968f5291,Namespace:kube-system,Attempt:0,}"
	May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.117100613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.117176425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.117190430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.117202126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.189400977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fdclm,Uid:f135ae50-d2c1-4d46-9045-52bf968f5291,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e38a9d2fc98a2e012390c9312aafe412574ea6c8f7aa72c786be600f3c76dc7\""
	May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.192447705Z" level=info msg="CreateContainer within sandbox \"4e38a9d2fc98a2e012390c9312aafe412574ea6c8f7aa72c786be600f3c76dc7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:1,}"
	May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.276066374Z" level=info msg="CreateContainer within sandbox \"4e38a9d2fc98a2e012390c9312aafe412574ea6c8f7aa72c786be600f3c76dc7\" for &ContainerMetadata{Name:kube-proxy,Attempt:1,} returns container id \"2d99e8fb77a0ebbffef6c61791dcd1b02242d6827bdb40ec9a43b2504c38617d\""
	May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.276490178Z" level=info msg="StartContainer for \"2d99e8fb77a0ebbffef6c61791dcd1b02242d6827bdb40ec9a43b2504c38617d\""
	May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.525002226Z" level=info msg="StartContainer for \"2d99e8fb77a0ebbffef6c61791dcd1b02242d6827bdb40ec9a43b2504c38617d\" returns successfully"
	May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.686791066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:19459c64-6c74-4bee-89aa-4db2436a469f,Namespace:kube-system,Attempt:0,} returns sandbox id \"23220399aa9fda62e9a01f6ff2cc1278ab199048d5846c964d42272d64185618\""
	May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.693673401Z" level=info msg="CreateContainer within sandbox \"23220399aa9fda62e9a01f6ff2cc1278ab199048d5846c964d42272d64185618\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:1,}"
	May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.723799600Z" level=info msg="CreateContainer within sandbox \"23220399aa9fda62e9a01f6ff2cc1278ab199048d5846c964d42272d64185618\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} returns container id \"523278fedeee7b83a54fb109111c3dfd57a2e1c068a412dacca609e5a13090d9\""
	May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.728652018Z" level=info msg="StartContainer for \"523278fedeee7b83a54fb109111c3dfd57a2e1c068a412dacca609e5a13090d9\""
	May 24 19:19:39 test-preload-262726 containerd[689]: time="2023-05-24T19:19:39.815369515Z" level=info msg="StartContainer for \"523278fedeee7b83a54fb109111c3dfd57a2e1c068a412dacca609e5a13090d9\" returns successfully"
	May 24 19:19:46 test-preload-262726 containerd[689]: time="2023-05-24T19:19:46.179857822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6d4b75cb6d-dl4xp,Uid:e6216ab8-7bef-475e-a7bb-fab7bf2404f3,Namespace:kube-system,Attempt:0,}"
	May 24 19:19:46 test-preload-262726 containerd[689]: time="2023-05-24T19:19:46.296405835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 19:19:46 test-preload-262726 containerd[689]: time="2023-05-24T19:19:46.296563390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:19:46 test-preload-262726 containerd[689]: time="2023-05-24T19:19:46.297053879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 19:19:46 test-preload-262726 containerd[689]: time="2023-05-24T19:19:46.297220098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:19:46 test-preload-262726 containerd[689]: time="2023-05-24T19:19:46.617144821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6d4b75cb6d-dl4xp,Uid:e6216ab8-7bef-475e-a7bb-fab7bf2404f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"d61c04fc36dad41da1f3fd121a1e9a58b3303b81b637296d8748b73384294c15\""
	May 24 19:19:46 test-preload-262726 containerd[689]: time="2023-05-24T19:19:46.626349352Z" level=info msg="CreateContainer within sandbox \"d61c04fc36dad41da1f3fd121a1e9a58b3303b81b637296d8748b73384294c15\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
	May 24 19:19:46 test-preload-262726 containerd[689]: time="2023-05-24T19:19:46.657778040Z" level=info msg="CreateContainer within sandbox \"d61c04fc36dad41da1f3fd121a1e9a58b3303b81b637296d8748b73384294c15\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"eb128cc2aa9b8ce0860feb86125448f00b7e048f303ab1fbeabe9cd91a3dbe66\""
	May 24 19:19:46 test-preload-262726 containerd[689]: time="2023-05-24T19:19:46.658449880Z" level=info msg="StartContainer for \"eb128cc2aa9b8ce0860feb86125448f00b7e048f303ab1fbeabe9cd91a3dbe66\""
	May 24 19:19:46 test-preload-262726 containerd[689]: time="2023-05-24T19:19:46.734080862Z" level=info msg="StartContainer for \"eb128cc2aa9b8ce0860feb86125448f00b7e048f303ab1fbeabe9cd91a3dbe66\" returns successfully"
	
	* 
	* ==> coredns [eb128cc2aa9b8ce0860feb86125448f00b7e048f303ab1fbeabe9cd91a3dbe66] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:35090 - 53711 "HINFO IN 2317895724519841220.6243386728523230990. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026718903s
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-262726
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-262726
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e
	                    minikube.k8s.io/name=test-preload-262726
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_24T19_16_01_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 24 May 2023 19:15:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-262726
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 24 May 2023 19:19:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 24 May 2023 19:19:47 +0000   Wed, 24 May 2023 19:15:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 24 May 2023 19:19:47 +0000   Wed, 24 May 2023 19:15:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 24 May 2023 19:19:47 +0000   Wed, 24 May 2023 19:15:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 24 May 2023 19:19:47 +0000   Wed, 24 May 2023 19:19:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.12
	  Hostname:    test-preload-262726
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 71fb17bf05d74f43b2840a45b1dbe120
	  System UUID:                71fb17bf-05d7-4f43-b284-0a45b1dbe120
	  Boot ID:                    46a523bd-8a48-41d6-abf4-2ff01283005e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-dl4xp                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m39s
	  kube-system                 etcd-test-preload-262726                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m53s
	  kube-system                 kube-apiserver-test-preload-262726             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-controller-manager-test-preload-262726    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-proxy-fdclm                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 kube-scheduler-test-preload-262726             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 14s                  kube-proxy       
	  Normal  Starting                 3m36s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m2s (x5 over 4m2s)  kubelet          Node test-preload-262726 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s (x4 over 4m2s)  kubelet          Node test-preload-262726 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m2s (x4 over 4m2s)  kubelet          Node test-preload-262726 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m53s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m53s                kubelet          Node test-preload-262726 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m53s                kubelet          Node test-preload-262726 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m53s                kubelet          Node test-preload-262726 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m53s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m43s                kubelet          Node test-preload-262726 status is now: NodeReady
	  Normal  RegisteredNode           3m40s                node-controller  Node test-preload-262726 event: Registered Node test-preload-262726 in Controller
	  Normal  Starting                 23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)    kubelet          Node test-preload-262726 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)    kubelet          Node test-preload-262726 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)    kubelet          Node test-preload-262726 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                   node-controller  Node test-preload-262726 event: Registered Node test-preload-262726 in Controller
	
	* 
	* ==> dmesg <==
	* [May24 19:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070534] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +3.948715] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.321789] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.143964] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.605264] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[May24 19:19] systemd-fstab-generator[528]: Ignoring "noauto" for root device
	[  +2.867542] systemd-fstab-generator[556]: Ignoring "noauto" for root device
	[  +0.103604] systemd-fstab-generator[567]: Ignoring "noauto" for root device
	[  +0.116088] systemd-fstab-generator[580]: Ignoring "noauto" for root device
	[  +0.096673] systemd-fstab-generator[591]: Ignoring "noauto" for root device
	[  +0.225058] systemd-fstab-generator[619]: Ignoring "noauto" for root device
	[  +5.860960] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[ +19.217176] systemd-fstab-generator[1023]: Ignoring "noauto" for root device
	[  +8.869960] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.419096] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [74fd7b94b3d161546000746309dad04a8441fc4b1e0c77eed524691371150bac] <==
	* {"level":"info","ts":"2023-05-24T19:19:33.456Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"ab0e927fe14112bb","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-05-24T19:19:33.459Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-05-24T19:19:33.462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ab0e927fe14112bb switched to configuration voters=(12325950308097266363)"}
	{"level":"info","ts":"2023-05-24T19:19:33.462Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5f0195cf24a31222","local-member-id":"ab0e927fe14112bb","added-peer-id":"ab0e927fe14112bb","added-peer-peer-urls":["https://192.168.39.12:2380"]}
	{"level":"info","ts":"2023-05-24T19:19:33.462Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5f0195cf24a31222","local-member-id":"ab0e927fe14112bb","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T19:19:33.462Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T19:19:33.464Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-05-24T19:19:33.465Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ab0e927fe14112bb","initial-advertise-peer-urls":["https://192.168.39.12:2380"],"listen-peer-urls":["https://192.168.39.12:2380"],"advertise-client-urls":["https://192.168.39.12:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.12:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-05-24T19:19:33.465Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-05-24T19:19:33.465Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.12:2380"}
	{"level":"info","ts":"2023-05-24T19:19:33.465Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.12:2380"}
	{"level":"info","ts":"2023-05-24T19:19:34.527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ab0e927fe14112bb is starting a new election at term 2"}
	{"level":"info","ts":"2023-05-24T19:19:34.527Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ab0e927fe14112bb became pre-candidate at term 2"}
	{"level":"info","ts":"2023-05-24T19:19:34.528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ab0e927fe14112bb received MsgPreVoteResp from ab0e927fe14112bb at term 2"}
	{"level":"info","ts":"2023-05-24T19:19:34.528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ab0e927fe14112bb became candidate at term 3"}
	{"level":"info","ts":"2023-05-24T19:19:34.528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ab0e927fe14112bb received MsgVoteResp from ab0e927fe14112bb at term 3"}
	{"level":"info","ts":"2023-05-24T19:19:34.528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ab0e927fe14112bb became leader at term 3"}
	{"level":"info","ts":"2023-05-24T19:19:34.528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ab0e927fe14112bb elected leader ab0e927fe14112bb at term 3"}
	{"level":"info","ts":"2023-05-24T19:19:34.528Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"ab0e927fe14112bb","local-member-attributes":"{Name:test-preload-262726 ClientURLs:[https://192.168.39.12:2379]}","request-path":"/0/members/ab0e927fe14112bb/attributes","cluster-id":"5f0195cf24a31222","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-24T19:19:34.529Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T19:19:34.529Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T19:19:34.531Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-24T19:19:34.533Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.12:2379"}
	{"level":"info","ts":"2023-05-24T19:19:34.543Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-24T19:19:34.543Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  19:19:54 up 1 min,  0 users,  load average: 1.25, 0.39, 0.14
	Linux test-preload-262726 5.10.57 #1 SMP Sat May 20 03:22:25 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [eaac0ac39a864752ab541e0b5eb1fec135493b9db624a5d5d1226d653a68a2f4] <==
	* I0524 19:19:37.162095       1 establishing_controller.go:76] Starting EstablishingController
	I0524 19:19:37.162141       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0524 19:19:37.162155       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0524 19:19:37.162198       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0524 19:19:37.163448       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0524 19:19:37.163558       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0524 19:19:37.250988       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0524 19:19:37.254173       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0524 19:19:37.263819       1 shared_informer.go:262] Caches are synced for crd-autoregister
	E0524 19:19:37.268428       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0524 19:19:37.303770       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0524 19:19:37.329475       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0524 19:19:37.332119       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0524 19:19:37.332378       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0524 19:19:37.332133       1 cache.go:39] Caches are synced for autoregister controller
	I0524 19:19:37.765332       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0524 19:19:38.142791       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0524 19:19:38.941379       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0524 19:19:38.951899       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0524 19:19:38.993821       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0524 19:19:39.014108       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0524 19:19:39.021378       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0524 19:19:39.829929       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0524 19:19:49.959412       1 controller.go:611] quota admission added evaluator for: endpoints
	I0524 19:19:49.976687       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [6775fb5cd8bfe62b27eab28d4469d6e0cd1e09ee65ad65b8d533bf1080ea6e76] <==
	* I0524 19:19:49.913236       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0524 19:19:49.913336       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-262726. Assuming now as a timestamp.
	I0524 19:19:49.913503       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0524 19:19:49.913762       1 shared_informer.go:262] Caches are synced for TTL
	I0524 19:19:49.914003       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0524 19:19:49.914401       1 event.go:294] "Event occurred" object="test-preload-262726" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-262726 event: Registered Node test-preload-262726 in Controller"
	I0524 19:19:49.917763       1 shared_informer.go:262] Caches are synced for job
	I0524 19:19:49.922903       1 shared_informer.go:262] Caches are synced for PV protection
	I0524 19:19:49.946387       1 shared_informer.go:262] Caches are synced for endpoint
	I0524 19:19:49.962003       1 shared_informer.go:262] Caches are synced for HPA
	I0524 19:19:49.965496       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0524 19:19:49.970757       1 shared_informer.go:262] Caches are synced for GC
	I0524 19:19:49.975824       1 shared_informer.go:262] Caches are synced for expand
	I0524 19:19:49.981377       1 shared_informer.go:262] Caches are synced for PVC protection
	I0524 19:19:50.012777       1 shared_informer.go:262] Caches are synced for ephemeral
	I0524 19:19:50.021454       1 shared_informer.go:262] Caches are synced for persistent volume
	I0524 19:19:50.033156       1 shared_informer.go:262] Caches are synced for attach detach
	I0524 19:19:50.081131       1 shared_informer.go:262] Caches are synced for resource quota
	I0524 19:19:50.089554       1 shared_informer.go:262] Caches are synced for stateful set
	I0524 19:19:50.094022       1 shared_informer.go:262] Caches are synced for disruption
	I0524 19:19:50.094055       1 disruption.go:371] Sending events to api server.
	I0524 19:19:50.126016       1 shared_informer.go:262] Caches are synced for resource quota
	I0524 19:19:50.548255       1 shared_informer.go:262] Caches are synced for garbage collector
	I0524 19:19:50.548408       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0524 19:19:50.577828       1 shared_informer.go:262] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [2d99e8fb77a0ebbffef6c61791dcd1b02242d6827bdb40ec9a43b2504c38617d] <==
	* I0524 19:19:39.730687       1 node.go:163] Successfully retrieved node IP: 192.168.39.12
	I0524 19:19:39.730749       1 server_others.go:138] "Detected node IP" address="192.168.39.12"
	I0524 19:19:39.730779       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0524 19:19:39.810902       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0524 19:19:39.810964       1 server_others.go:206] "Using iptables Proxier"
	I0524 19:19:39.813002       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0524 19:19:39.817936       1 server.go:661] "Version info" version="v1.24.4"
	I0524 19:19:39.817976       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0524 19:19:39.822060       1 config.go:226] "Starting endpoint slice config controller"
	I0524 19:19:39.822412       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0524 19:19:39.822441       1 config.go:317] "Starting service config controller"
	I0524 19:19:39.822447       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0524 19:19:39.824890       1 config.go:444] "Starting node config controller"
	I0524 19:19:39.824898       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0524 19:19:39.923535       1 shared_informer.go:262] Caches are synced for service config
	I0524 19:19:39.923898       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0524 19:19:39.925637       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [29eed419b3c13e6e2bf515c9caadf1be7cfdc191696e0ad2353dd6fa4f9ece9c] <==
	* I0524 19:19:34.744263       1 serving.go:348] Generated self-signed cert in-memory
	W0524 19:19:37.187292       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0524 19:19:37.187584       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0524 19:19:37.187998       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0524 19:19:37.188182       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0524 19:19:37.252032       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0524 19:19:37.252080       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0524 19:19:37.253794       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0524 19:19:37.257802       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0524 19:19:37.258058       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0524 19:19:37.258237       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0524 19:19:37.359371       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-05-24 19:18:46 UTC, ends at Wed 2023-05-24 19:19:54 UTC. --
	May 24 19:19:37 test-preload-262726 kubelet[1029]: I0524 19:19:37.305006    1029 setters.go:532] "Node became not ready" node="test-preload-262726" condition={Type:Ready Status:False LastHeartbeatTime:2023-05-24 19:19:37.304764733 +0000 UTC m=+6.289326905 LastTransitionTime:2023-05-24 19:19:37.304764733 +0000 UTC m=+6.289326905 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
	May 24 19:19:37 test-preload-262726 kubelet[1029]: E0524 19:19:37.335933    1029 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.160479    1029 apiserver.go:52] "Watching apiserver"
	May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.163926    1029 topology_manager.go:200] "Topology Admit Handler"
	May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.164081    1029 topology_manager.go:200] "Topology Admit Handler"
	May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.164157    1029 topology_manager.go:200] "Topology Admit Handler"
	May 24 19:19:38 test-preload-262726 kubelet[1029]: E0524 19:19:38.165895    1029 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6d4b75cb6d-dl4xp" podUID=e6216ab8-7bef-475e-a7bb-fab7bf2404f3
	May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.234510    1029 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84m27\" (UniqueName: \"kubernetes.io/projected/e6216ab8-7bef-475e-a7bb-fab7bf2404f3-kube-api-access-84m27\") pod \"coredns-6d4b75cb6d-dl4xp\" (UID: \"e6216ab8-7bef-475e-a7bb-fab7bf2404f3\") " pod="kube-system/coredns-6d4b75cb6d-dl4xp"
	May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.234543    1029 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f135ae50-d2c1-4d46-9045-52bf968f5291-lib-modules\") pod \"kube-proxy-fdclm\" (UID: \"f135ae50-d2c1-4d46-9045-52bf968f5291\") " pod="kube-system/kube-proxy-fdclm"
	May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.234565    1029 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmhkr\" (UniqueName: \"kubernetes.io/projected/19459c64-6c74-4bee-89aa-4db2436a469f-kube-api-access-tmhkr\") pod \"storage-provisioner\" (UID: \"19459c64-6c74-4bee-89aa-4db2436a469f\") " pod="kube-system/storage-provisioner"
	May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.234585    1029 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6216ab8-7bef-475e-a7bb-fab7bf2404f3-config-volume\") pod \"coredns-6d4b75cb6d-dl4xp\" (UID: \"e6216ab8-7bef-475e-a7bb-fab7bf2404f3\") " pod="kube-system/coredns-6d4b75cb6d-dl4xp"
	May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.234662    1029 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f135ae50-d2c1-4d46-9045-52bf968f5291-kube-proxy\") pod \"kube-proxy-fdclm\" (UID: \"f135ae50-d2c1-4d46-9045-52bf968f5291\") " pod="kube-system/kube-proxy-fdclm"
	May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.234682    1029 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f135ae50-d2c1-4d46-9045-52bf968f5291-xtables-lock\") pod \"kube-proxy-fdclm\" (UID: \"f135ae50-d2c1-4d46-9045-52bf968f5291\") " pod="kube-system/kube-proxy-fdclm"
	May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.234700    1029 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l69pk\" (UniqueName: \"kubernetes.io/projected/f135ae50-d2c1-4d46-9045-52bf968f5291-kube-api-access-l69pk\") pod \"kube-proxy-fdclm\" (UID: \"f135ae50-d2c1-4d46-9045-52bf968f5291\") " pod="kube-system/kube-proxy-fdclm"
	May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.234720    1029 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/19459c64-6c74-4bee-89aa-4db2436a469f-tmp\") pod \"storage-provisioner\" (UID: \"19459c64-6c74-4bee-89aa-4db2436a469f\") " pod="kube-system/storage-provisioner"
	May 24 19:19:38 test-preload-262726 kubelet[1029]: I0524 19:19:38.234728    1029 reconciler.go:159] "Reconciler: start to sync state"
	May 24 19:19:38 test-preload-262726 kubelet[1029]: E0524 19:19:38.336196    1029 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 24 19:19:38 test-preload-262726 kubelet[1029]: E0524 19:19:38.336293    1029 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e6216ab8-7bef-475e-a7bb-fab7bf2404f3-config-volume podName:e6216ab8-7bef-475e-a7bb-fab7bf2404f3 nodeName:}" failed. No retries permitted until 2023-05-24 19:19:38.836273418 +0000 UTC m=+7.820835599 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e6216ab8-7bef-475e-a7bb-fab7bf2404f3-config-volume") pod "coredns-6d4b75cb6d-dl4xp" (UID: "e6216ab8-7bef-475e-a7bb-fab7bf2404f3") : object "kube-system"/"coredns" not registered
	May 24 19:19:38 test-preload-262726 kubelet[1029]: E0524 19:19:38.839996    1029 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 24 19:19:38 test-preload-262726 kubelet[1029]: E0524 19:19:38.840075    1029 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e6216ab8-7bef-475e-a7bb-fab7bf2404f3-config-volume podName:e6216ab8-7bef-475e-a7bb-fab7bf2404f3 nodeName:}" failed. No retries permitted until 2023-05-24 19:19:39.840060409 +0000 UTC m=+8.824622585 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e6216ab8-7bef-475e-a7bb-fab7bf2404f3-config-volume") pod "coredns-6d4b75cb6d-dl4xp" (UID: "e6216ab8-7bef-475e-a7bb-fab7bf2404f3") : object "kube-system"/"coredns" not registered
	May 24 19:19:39 test-preload-262726 kubelet[1029]: E0524 19:19:39.848252    1029 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 24 19:19:39 test-preload-262726 kubelet[1029]: E0524 19:19:39.848356    1029 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e6216ab8-7bef-475e-a7bb-fab7bf2404f3-config-volume podName:e6216ab8-7bef-475e-a7bb-fab7bf2404f3 nodeName:}" failed. No retries permitted until 2023-05-24 19:19:41.848333901 +0000 UTC m=+10.832896074 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e6216ab8-7bef-475e-a7bb-fab7bf2404f3-config-volume") pod "coredns-6d4b75cb6d-dl4xp" (UID: "e6216ab8-7bef-475e-a7bb-fab7bf2404f3") : object "kube-system"/"coredns" not registered
	May 24 19:19:40 test-preload-262726 kubelet[1029]: E0524 19:19:40.268308    1029 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6d4b75cb6d-dl4xp" podUID=e6216ab8-7bef-475e-a7bb-fab7bf2404f3
	May 24 19:19:41 test-preload-262726 kubelet[1029]: E0524 19:19:41.862953    1029 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	May 24 19:19:41 test-preload-262726 kubelet[1029]: E0524 19:19:41.863399    1029 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/e6216ab8-7bef-475e-a7bb-fab7bf2404f3-config-volume podName:e6216ab8-7bef-475e-a7bb-fab7bf2404f3 nodeName:}" failed. No retries permitted until 2023-05-24 19:19:45.863378995 +0000 UTC m=+14.847941156 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e6216ab8-7bef-475e-a7bb-fab7bf2404f3-config-volume") pod "coredns-6d4b75cb6d-dl4xp" (UID: "e6216ab8-7bef-475e-a7bb-fab7bf2404f3") : object "kube-system"/"coredns" not registered
	
	* 
	* ==> storage-provisioner [523278fedeee7b83a54fb109111c3dfd57a2e1c068a412dacca609e5a13090d9] <==
	* I0524 19:19:39.877154       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
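The kubelet entries near the end of the stdout block above show the volume reconciler backing off between mount attempts for the coredns config-volume: "No retries permitted until" 500ms out, then 1s, 2s, and 4s, i.e. the wait doubles after each failure while the kubelet waits for the "kube-system"/"coredns" ConfigMap to register after the restart. Once the object registers, the next attempt succeeds and the sandbox at 19:19:46 starts coredns normally. A minimal Go sketch of that capped-doubling retry pattern (illustrative only; retryWithBackoff is a made-up helper, not kubelet's actual nestedpendingoperations code):

	// Illustrative only: capped-doubling retry, mirroring the
	// 500ms -> 1s -> 2s -> 4s spacing in the kubelet log above.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func retryWithBackoff(op func() error, initial, max time.Duration, attempts int) error {
		delay := initial
		for i := 0; i < attempts; i++ {
			if err := op(); err == nil {
				return nil
			}
			time.Sleep(delay) // the "No retries permitted until ..." window
			delay *= 2        // double after each failure
			if delay > max {
				delay = max // cap the wait
			}
		}
		return errors.New("retry budget exhausted")
	}

	func main() {
		err := retryWithBackoff(func() error {
			// Stand-in for the failing step in the log: the ConfigMap
			// is not yet registered with the kubelet after the restart.
			return errors.New(`object "kube-system"/"coredns" not registered`)
		}, 500*time.Millisecond, 4*time.Second, 5)
		fmt.Println(err)
	}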
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-262726 -n test-preload-262726
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-262726 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
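For context on the post-mortem commands above: the --format flag on "minikube status" takes a Go text/template, so {{.APIServer}} prints just the apiserver field of the status, and the kubectl --field-selector=status.phase!=Running query lists any pod not in the Running phase. A standalone sketch of the template mechanism (the Status struct here is hypothetical, not minikube's actual type):

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a hypothetical stand-in for the struct the CLI renders.
	type Status struct {
		Host, Kubelet, APIServer string
	}

	func main() {
		// {{.APIServer}} selects a single field, as in the command above.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Running", APIServer: "Running"})
	}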
helpers_test.go:175: Cleaning up "test-preload-262726" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-262726
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-262726: (1.246236545s)
--- FAIL: TestPreload (297.25s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (1916.35s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.22.0.2758915908.exe start -p running-upgrade-134012 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0524 19:23:36.094559   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
version_upgrade_test.go:132: (dbg) Non-zero exit: /tmp/minikube-v1.22.0.2758915908.exe start -p running-upgrade-134012 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: exit status 109 (15m16.261327848s)

                                                
                                                
-- stdout --
	* [running-upgrade-134012] minikube v1.22.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16573
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16573-71939/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig2506259060
	* Using the kvm2 driver based on user configuration
	* Downloading VM boot image ...
	* minikube 1.30.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.30.1
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	* Starting control plane node running-upgrade-134012 in cluster running-upgrade-134012
	* Downloading Kubernetes v1.21.2 preload ...
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.21.2 on containerd 1.4.4 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > minikube-v1.22.0.iso.sha256: 65 B / 65 B  100.00%
	    > minikube-v1.22.0.iso: 242.95 MiB / 242.95 MiB  100.00% 187.82 MiB p/s  [interleaved progress snapshots elided]
	    > preloaded-images-k8s-v11-v1...: 922.45 MiB / 922.45 MiB  100.00%  [interleaved progress snapshots and final rate elided]
	E0524 19:23:46.406762  100270 vm_assets.go:131] stat("/home/jenkins/minikube-integration/16573-71939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-containerd-overlay2-amd64.tar.lz4") failed: stat /home/jenkins/minikube-integration/16573-71939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-containerd-overlay2-amd64.tar.lz4: no such file or directory
	E0524 19:23:46.406810  100270 vm_assets.go:131] stat("/home/jenkins/minikube-integration/16573-71939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-containerd-overlay2-amd64.tar.lz4") failed: stat /home/jenkins/minikube-integration/16573-71939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-containerd-overlay2-amd64.tar.lz4: no such file or directory
	    > kubectl.sha256: 64 B / 64 B  100.00%
	    > kubelet.sha256: 64 B / 64 B  100.00%
	    > kubeadm.sha256: 64 B / 64 B  100.00%
	    > kubectl: 44.27 MiB / 44.27 MiB  100.00% 658.35 MiB p/s 300ms
	    > kubeadm: 42.57 MiB / 42.57 MiB  100.00% 228.27 MiB p/s 400ms
	    > kubelet: 112.68 MiB / 112.68 MiB  100.00% 262.64 MiB p/s 600ms
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost running-upgrade-134012] and IPs [192.168.72.125 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost running-upgrade-134012] and IPs [192.168.72.125 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│    * If the above advice does not help, please let us know:                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                         │
	│                                                                                       │
	│    * Please attach the following file to the GitHub issue:                            │
	│    * - /home/jenkins/minikube-integration/16573-71939/.minikube/logs/lastStart.txt    │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
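The stderr block above ends with minikube's own suggestion for this K8S_KUBELET_NOT_RUNNING exit: check 'journalctl -xeu kubelet' and try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start. Applied to this profile, the retry would look like the command below (illustrative only; as the next Run line shows, this test retried without the flag, so whether it resolves the wait-control-plane timeout here is unverified):

	/tmp/minikube-v1.22.0.2758915908.exe start -p running-upgrade-134012 --memory=2200 --vm-driver=kvm2 --container-runtime=containerd --extra-config=kubelet.cgroup-driver=systemd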
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.22.0.2758915908.exe start -p running-upgrade-134012 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0524 19:38:48.773915   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 19:39:02.538780   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
E0524 19:40:16.095053   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
E0524 19:40:33.043849   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
version_upgrade_test.go:132: (dbg) Non-zero exit: /tmp/minikube-v1.22.0.2758915908.exe start -p running-upgrade-134012 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: exit status 109 (16m37.007847867s)

-- stdout --
	* [running-upgrade-134012] minikube v1.22.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16573
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16573-71939/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig2305090336
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-134012 in cluster running-upgrade-134012
	* Downloading Kubernetes v1.21.2 preload ...
	* Updating the running kvm2 "running-upgrade-134012" VM ...
	* Preparing Kubernetes v1.21.2 on containerd 1.4.4 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v11-v1...: 922.45 MiB / 922.45 MiB  100.00% 154.14 MiB p/s (progress meter updates collapsed)
	E0524 19:38:57.126590  110305 logs.go:267] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:38:57Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:39:28.708651  110305 logs.go:267] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:39:28Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:39:28.732703  110305 logs.go:267] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:39:28Z" level=fatal msg="listing containers: rpc error: code = Unknown desc = server is not initialized yet"
	E0524 19:39:28.751641  110305 logs.go:267] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:39:28Z" level=fatal msg="listing containers: rpc error: code = Unknown desc = server is not initialized yet"
	E0524 19:39:28.775566  110305 logs.go:267] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:39:28Z" level=fatal msg="listing containers: rpc error: code = Unknown desc = server is not initialized yet"
	E0524 19:39:28.790605  110305 logs.go:267] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:39:28Z" level=fatal msg="listing containers: rpc error: code = Unknown desc = server is not initialized yet"
	E0524 19:39:28.817698  110305 logs.go:267] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:39:28Z" level=fatal msg="listing containers: rpc error: code = Unknown desc = server is not initialized yet"
	! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	E0524 19:53:54.954979  110305 logs.go:267] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:53:54Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	* 
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│    * If the above advice does not help, please let us know:                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                         │
	│                                                                                       │
	│    * Please attach the following file to the GitHub issue:                            │
	│    * - /home/jenkins/minikube-integration/16573-71939/.minikube/logs/lastStart.txt    │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
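The suggestion printed above maps directly onto a start flag; a minimal sketch of the retry, reusing the flags from this run (whether the hint actually helps the legacy v1.22.0 binary is not established here):

	# Pin the kubelet cgroup driver to systemd, per minikube's own hint.
	minikube start -p running-upgrade-134012 --memory=2200 --vm-driver=kvm2 \
	  --container-runtime=containerd \
	  --extra-config=kubelet.cgroup-driver=systemd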
version_upgrade_test.go:138: legacy v1.22.0 start failed: exit status 109
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-05-24 19:53:56.595595688 +0000 UTC m=+4653.613383111
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-134012 -n running-upgrade-134012
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-134012 -n running-upgrade-134012: exit status 6 (221.502233ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0524 19:53:56.804160  118077 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-134012" does not appear in /home/jenkins/minikube-integration/16573-71939/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-134012" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
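The stale-context warning quoted above has a direct fix; a minimal sketch, using the profile name from this run:

	# Repoint kubectl at the profile's current endpoint, then verify.
	minikube update-context -p running-upgrade-134012
	kubectl config current-context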
helpers_test.go:175: Cleaning up "running-upgrade-134012" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-134012
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-134012: (1.294086718s)
--- FAIL: TestRunningBinaryUpgrade (1916.35s)

TestStoppedBinaryUpgrade/Upgrade (1675.16s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.22.0.1826756098.exe start -p stopped-upgrade-849274 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0524 19:26:51.823139   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
version_upgrade_test.go:195: (dbg) Non-zero exit: /tmp/minikube-v1.22.0.1826756098.exe start -p stopped-upgrade-849274 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: exit status 109 (14m35.111874072s)

-- stdout --
	* [stopped-upgrade-849274] minikube v1.22.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16573
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16573-71939/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig602146571
	* Using the kvm2 driver based on user configuration
	* Starting control plane node stopped-upgrade-849274 in cluster stopped-upgrade-849274
	* Downloading Kubernetes v1.21.2 preload ...
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.21.2 on containerd 1.4.4 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v11-v1...: 922.45 MiB / 922.45 MiB  100.00% 149.60 MiB p/s (progress meter updates collapsed)
	E0524 19:26:59.145227  105330 vm_assets.go:131] stat("/home/jenkins/minikube-integration/16573-71939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-containerd-overlay2-amd64.tar.lz4") failed: stat /home/jenkins/minikube-integration/16573-71939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-containerd-overlay2-amd64.tar.lz4: no such file or directory
	E0524 19:26:59.145272  105330 vm_assets.go:131] stat("/home/jenkins/minikube-integration/16573-71939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-containerd-overlay2-amd64.tar.lz4") failed: stat /home/jenkins/minikube-integration/16573-71939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.2-containerd-overlay2-amd64.tar.lz4: no such file or directory
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost stopped-upgrade-849274] and IPs [192.168.39.50 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost stopped-upgrade-849274] and IPs [192.168.39.50 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│    * If the above advice does not help, please let us know:                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                         │
	│                                                                                       │
	│    * Please attach the following file to the GitHub issue:                            │
	│    * - /home/jenkins/minikube-integration/16573-71939/.minikube/logs/lastStart.txt    │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
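One detail in the block above stands out: right after the preload download reported 100%, two stat() calls found no tarball in the cache. A minimal sketch for confirming the cache state on the host, with the path taken from the log:

	# The download claimed success, so check whether the tarball actually exists.
	ls -lh /home/jenkins/minikube-integration/16573-71939/.minikube/cache/preloaded-tarball/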
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.22.0.1826756098.exe start -p stopped-upgrade-849274 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:195: (dbg) Non-zero exit: /tmp/minikube-v1.22.0.1826756098.exe start -p stopped-upgrade-849274 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: exit status 80 (4m57.561378191s)

-- stdout --
	* [stopped-upgrade-849274] minikube v1.22.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16573
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16573-71939/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig956685844
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-849274 in cluster stopped-upgrade-849274
	* Updating the running kvm2 "stopped-upgrade-849274" VM ...
	* Preparing Kubernetes v1.21.2 on containerd 1.4.4 ...
	
	

-- /stdout --
** stderr ** 
	E0524 19:41:58.767961  112532 logs.go:267] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:41:58Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:42:00.785816  112532 logs.go:267] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:42:00Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:42:02.802863  112532 logs.go:267] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:42:02Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:42:04.816786  112532 logs.go:267] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:42:04Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:42:06.836669  112532 logs.go:267] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:42:06Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:42:08.855456  112532 logs.go:267] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:42:08Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:42:10.871827  112532 logs.go:267] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:42:10Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:42:12.887801  112532 logs.go:267] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:42:12Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:42:20.268337  112532 logs.go:267] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:42:20Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:42:22.296226  112532 logs.go:267] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:42:22Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:42:24.313968  112532 logs.go:267] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:42:24Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:42:26.331588  112532 logs.go:267] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:42:26Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:42:28.348651  112532 logs.go:267] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:42:28Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:42:30.365560  112532 logs.go:267] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:42:30Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:42:32.382587  112532 logs.go:267] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:42:32Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:42:34.402194  112532 logs.go:267] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:42:34Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:42:41.765721  112532 logs.go:267] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:42:41Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:42:43.783837  112532 logs.go:267] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:42:43Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:42:45.801943  112532 logs.go:267] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:42:45Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:42:47.818495  112532 logs.go:267] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:42:47Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:42:49.835812  112532 logs.go:267] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:42:49Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:42:51.855168  112532 logs.go:267] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:42:51Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:42:53.879045  112532 logs.go:267] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:42:53Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:42:55.903268  112532 logs.go:267] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:42:55Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:43:03.266423  112532 logs.go:267] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:43:03Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:43:05.285243  112532 logs.go:267] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:43:05Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:43:07.303482  112532 logs.go:267] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:43:07Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:43:09.322862  112532 logs.go:267] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:43:09Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:43:11.342166  112532 logs.go:267] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:43:11Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:43:13.361127  112532 logs.go:267] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:43:13Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:43:15.377907  112532 logs.go:267] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:43:15Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:43:17.396186  112532 logs.go:267] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:43:17Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:43:24.767393  112532 logs.go:267] Failed to list containers for "kube-apiserver": crictl list: sudo crictl ps -a --quiet --name=kube-apiserver: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:43:24Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:43:26.787749  112532 logs.go:267] Failed to list containers for "etcd": crictl list: sudo crictl ps -a --quiet --name=etcd: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:43:26Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:43:28.807040  112532 logs.go:267] Failed to list containers for "coredns": crictl list: sudo crictl ps -a --quiet --name=coredns: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:43:28Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:43:30.826705  112532 logs.go:267] Failed to list containers for "kube-scheduler": crictl list: sudo crictl ps -a --quiet --name=kube-scheduler: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:43:30Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:43:32.846995  112532 logs.go:267] Failed to list containers for "kube-proxy": crictl list: sudo crictl ps -a --quiet --name=kube-proxy: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:43:32Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:43:34.863490  112532 logs.go:267] Failed to list containers for "kubernetes-dashboard": crictl list: sudo crictl ps -a --quiet --name=kubernetes-dashboard: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:43:34Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:43:36.879516  112532 logs.go:267] Failed to list containers for "storage-provisioner": crictl list: sudo crictl ps -a --quiet --name=storage-provisioner: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:43:36Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	E0524 19:43:38.896845  112532 logs.go:267] Failed to list containers for "kube-controller-manager": crictl list: sudo crictl ps -a --quiet --name=kube-controller-manager: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-05-24T19:43:38Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	[... the same crictl "connect endpoint 'unix:///run/containerd/containerd.sock'" error repeats in the same eight-container rotation (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kubernetes-dashboard, storage-provisioner, kube-controller-manager) roughly every 2s from 19:43:45 through 19:45:03 ...]
	! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR CRI]: container runtime is not running: output: time="2023-05-24T19:45:15Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	, error: exit status 1
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	[... the same eight per-container crictl connection errors repeat again from 19:45:23 through 19:45:37 ...]
	* 
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase preflight: [preflight] Some fatal errors occurred:
		[ERROR CRI]: container runtime is not running: output: time="2023-05-24T19:45:21Z" level=fatal msg="connect: connect endpoint 'unix:///run/containerd/containerd.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	, error: exit status 1
	[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│    * If the above advice does not help, please let us know:                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                         │
	│                                                                                       │
	│    * Please attach the following file to the GitHub issue:                            │
	│    * - /home/jenkins/minikube-integration/16573-71939/.minikube/logs/lastStart.txt    │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to GUEST_START: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
	[stdout, stderr, and the advice box identical to the "X Error starting cluster" output above]

                                                
                                                
** /stderr **
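Every "Failed to list containers" entry above shares one root cause: crictl cannot reach the containerd socket inside the VM, so log collection, the apiserver health wait, and finally the kubeadm CRI preflight check all fail in turn. A minimal manual check from the host, assuming SSH access to the affected VM (the profile name below is illustrative; substitute the one from the failing run):

	# Is containerd running inside the VM at all?
	minikube ssh -p stopped-upgrade-849274 -- sudo systemctl status containerd

	# Recent containerd logs, to see why the socket never came up
	minikube ssh -p stopped-upgrade-849274 -- "sudo journalctl -u containerd --no-pager | tail -n 50"

	# Reproduce the failing CRI probe by hand
	minikube ssh -p stopped-upgrade-849274 -- sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock info

If containerd turns out to be stopped, "sudo systemctl restart containerd" inside the VM is the obvious first remediation before re-running the start.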
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.22.0.1826756098.exe start -p stopped-upgrade-849274 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:195: (dbg) Non-zero exit: /tmp/minikube-v1.22.0.1826756098.exe start -p stopped-upgrade-849274 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: exit status 109 (8m19.130168216s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-849274] minikube v1.22.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16573
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16573-71939/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig4039355902
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-849274 in cluster stopped-upgrade-849274
	* Downloading Kubernetes v1.21.2 preload ...
	* Updating the running kvm2 "stopped-upgrade-849274" VM ...
	* Preparing Kubernetes v1.21.2 on containerd 1.4.4 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v11-v1...: 922.45 MiB / 922.45 MiB  100.00% 203.26 MiB p/s
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.21.2
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
	[stdout and stderr identical to the kubeadm init output above]
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│    * If the above advice does not help, please let us know:                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                         │
	│                                                                                       │
	│    * Please attach the following file to the GitHub issue:                            │
	│    * - /home/jenkins/minikube-integration/16573-71939/.minikube/logs/lastStart.txt    │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": Process exited with status 1
	[stdout and stderr identical to the kubeadm init output above]
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
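In this second failure mode the runtime recovered far enough for kubeadm to write the static-pod manifests, but the kubelet never brought the control plane up within the 4m0s wait. The kubeadm hints and the minikube suggestion above map to commands along these lines; a sketch only, reusing the profile from this run, and untested against the legacy v1.22.0 binary:

	# Kubelet state inside the VM, per the kubeadm hint
	minikube ssh -p stopped-upgrade-849274 -- sudo systemctl status kubelet
	minikube ssh -p stopped-upgrade-849274 -- "sudo journalctl -xeu kubelet | tail -n 100"

	# Any crashed control-plane containers?
	minikube ssh -p stopped-upgrade-849274 -- "sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause"

	# Retry with the kubelet cgroup driver pinned to systemd, as the log suggests
	minikube start -p stopped-upgrade-849274 --memory=2200 --vm-driver=kvm2 \
	  --container-runtime=containerd --extra-config=kubelet.cgroup-driver=systemd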
version_upgrade_test.go:201: legacy v1.22.0 start failed: exit status 109
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (1675.16s)
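To iterate on this failure outside CI, the subtest can be run in isolation from a minikube source checkout; a sketch under the assumption that the upstream Makefile provides the out/minikube-linux-amd64 target and that the integration harness picks up that freshly built binary by default (the 60m timeout is illustrative, chosen to exceed the 1675s observed here):

	# From the minikube repo root
	make out/minikube-linux-amd64
	go test ./test/integration -run "TestStoppedBinaryUpgrade/Upgrade" -v -timeout 60m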

                                                
                                    

Test pass (262/300)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 10.09
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.27.2/json-events 5.15
11 TestDownloadOnly/v1.27.2/preload-exists 0
15 TestDownloadOnly/v1.27.2/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.12
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.11
19 TestBinaryMirror 0.53
20 TestOffline 106.09
22 TestAddons/Setup 143.29
24 TestAddons/parallel/Registry 15.53
25 TestAddons/parallel/Ingress 22.84
26 TestAddons/parallel/InspektorGadget 10.75
27 TestAddons/parallel/MetricsServer 6.06
28 TestAddons/parallel/HelmTiller 15.4
30 TestAddons/parallel/CSI 63.77
31 TestAddons/parallel/Headlamp 13.82
32 TestAddons/parallel/CloudSpanner 5.74
35 TestAddons/serial/GCPAuth/Namespaces 0.12
36 TestAddons/StoppedEnableDisable 92.01
37 TestCertOptions 55.87
38 TestCertExpiration 248.2
40 TestForceSystemdFlag 59.64
41 TestForceSystemdEnv 63.19
42 TestKVMDriverInstallOrUpdate 3.31
46 TestErrorSpam/setup 51.94
47 TestErrorSpam/start 0.33
48 TestErrorSpam/status 0.69
49 TestErrorSpam/pause 1.31
50 TestErrorSpam/unpause 1.47
51 TestErrorSpam/stop 1.5
54 TestFunctional/serial/CopySyncFile 0
55 TestFunctional/serial/StartWithProxy 96.69
56 TestFunctional/serial/AuditLog 0
57 TestFunctional/serial/SoftStart 6.01
58 TestFunctional/serial/KubeContext 0.04
59 TestFunctional/serial/KubectlGetPods 0.08
62 TestFunctional/serial/CacheCmd/cache/add_remote 3.32
63 TestFunctional/serial/CacheCmd/cache/add_local 1.5
64 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
65 TestFunctional/serial/CacheCmd/cache/list 0.04
66 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
67 TestFunctional/serial/CacheCmd/cache/cache_reload 1.93
68 TestFunctional/serial/CacheCmd/cache/delete 0.09
69 TestFunctional/serial/MinikubeKubectlCmd 0.1
70 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
71 TestFunctional/serial/ExtraConfig 42.34
72 TestFunctional/serial/ComponentHealth 0.07
73 TestFunctional/serial/LogsCmd 1.21
74 TestFunctional/serial/LogsFileCmd 1.21
76 TestFunctional/parallel/ConfigCmd 0.28
77 TestFunctional/parallel/DashboardCmd 12.85
78 TestFunctional/parallel/DryRun 0.27
79 TestFunctional/parallel/InternationalLanguage 0.17
80 TestFunctional/parallel/StatusCmd 1.09
84 TestFunctional/parallel/ServiceCmdConnect 10.58
85 TestFunctional/parallel/AddonsCmd 0.12
86 TestFunctional/parallel/PersistentVolumeClaim 44.4
88 TestFunctional/parallel/SSHCmd 0.45
89 TestFunctional/parallel/CpCmd 0.89
90 TestFunctional/parallel/MySQL 27.03
91 TestFunctional/parallel/FileSync 0.26
92 TestFunctional/parallel/CertSync 1.46
96 TestFunctional/parallel/NodeLabels 0.07
98 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
100 TestFunctional/parallel/License 0.17
101 TestFunctional/parallel/ProfileCmd/profile_not_create 0.3
111 TestFunctional/parallel/ProfileCmd/profile_list 0.33
112 TestFunctional/parallel/ServiceCmd/DeployApp 11.26
113 TestFunctional/parallel/ProfileCmd/profile_json_output 0.26
114 TestFunctional/parallel/Version/short 0.04
115 TestFunctional/parallel/Version/components 0.52
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.39
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.35
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.08
121 TestFunctional/parallel/ImageCommands/Setup 0.95
122 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.19
123 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.28
124 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.1
125 TestFunctional/parallel/MountCmd/any-port 7.92
126 TestFunctional/parallel/ServiceCmd/List 0.35
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.45
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.33
129 TestFunctional/parallel/ServiceCmd/Format 0.34
130 TestFunctional/parallel/ServiceCmd/URL 0.31
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.73
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.04
134 TestFunctional/parallel/MountCmd/specific-port 1.91
135 TestFunctional/parallel/MountCmd/VerifyCleanup 1.48
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.61
137 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
138 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
139 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
140 TestFunctional/delete_addon-resizer_images 0.07
141 TestFunctional/delete_my-image_image 0.01
142 TestFunctional/delete_minikube_cached_images 0.01
146 TestIngressAddonLegacy/StartLegacyK8sCluster 133.73
148 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.75
149 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.38
150 TestIngressAddonLegacy/serial/ValidateIngressAddons 38.52
153 TestJSONOutput/start/Command 76.4
154 TestJSONOutput/start/Audit 0
156 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
157 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
159 TestJSONOutput/pause/Command 0.58
160 TestJSONOutput/pause/Audit 0
162 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
165 TestJSONOutput/unpause/Command 0.57
166 TestJSONOutput/unpause/Audit 0
168 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/stop/Command 7.09
172 TestJSONOutput/stop/Audit 0
174 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
176 TestErrorJSONOutput 0.18
181 TestMainNoArgs 0.04
182 TestMinikubeProfile 109.42
185 TestMountStart/serial/StartWithMountFirst 27.08
186 TestMountStart/serial/VerifyMountFirst 0.37
187 TestMountStart/serial/StartWithMountSecond 30.3
188 TestMountStart/serial/VerifyMountSecond 0.36
189 TestMountStart/serial/DeleteFirst 1.24
190 TestMountStart/serial/VerifyMountPostDelete 0.42
191 TestMountStart/serial/Stop 1.19
192 TestMountStart/serial/RestartStopped 22.98
193 TestMountStart/serial/VerifyMountPostStop 0.37
196 TestMultiNode/serial/FreshStart2Nodes 120.5
197 TestMultiNode/serial/DeployApp2Nodes 3.94
198 TestMultiNode/serial/PingHostFrom2Pods 0.8
199 TestMultiNode/serial/AddNode 52.38
200 TestMultiNode/serial/ProfileList 0.22
201 TestMultiNode/serial/CopyFile 7.11
202 TestMultiNode/serial/StopNode 2.05
203 TestMultiNode/serial/StartAfterStop 71.09
204 TestMultiNode/serial/RestartKeepsNodes 513.89
205 TestMultiNode/serial/DeleteNode 1.94
206 TestMultiNode/serial/StopMultiNode 184.07
207 TestMultiNode/serial/RestartMultiNode 235.26
208 TestMultiNode/serial/ValidateNameConflict 54.91
215 TestScheduledStopUnix 125.79
221 TestKubernetesUpgrade 238.68
224 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
225 TestNoKubernetes/serial/StartWithK8s 103.18
226 TestNoKubernetes/serial/StartWithStopK8s 16.17
234 TestNoKubernetes/serial/Start 63.84
242 TestNetworkPlugins/group/false 3.58
246 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
247 TestNoKubernetes/serial/ProfileList 32.89
248 TestNoKubernetes/serial/Stop 1.23
249 TestNoKubernetes/serial/StartNoArgs 24.94
250 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
251 TestStoppedBinaryUpgrade/Setup 0.43
254 TestPause/serial/Start 73.98
255 TestPause/serial/SecondStartNoReconfiguration 7.92
257 TestStartStop/group/old-k8s-version/serial/FirstStart 169.38
258 TestPause/serial/Pause 0.86
259 TestPause/serial/VerifyStatus 0.31
260 TestPause/serial/Unpause 0.73
261 TestPause/serial/PauseAgain 0.94
262 TestPause/serial/DeletePaused 1.2
263 TestPause/serial/VerifyDeletedResources 30.9
265 TestStartStop/group/no-preload/serial/FirstStart 132.61
266 TestStartStop/group/old-k8s-version/serial/DeployApp 9.65
267 TestStartStop/group/no-preload/serial/DeployApp 8.66
268 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.23
269 TestStartStop/group/old-k8s-version/serial/Stop 92.38
270 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.39
271 TestStartStop/group/no-preload/serial/Stop 92.6
272 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
273 TestStartStop/group/old-k8s-version/serial/SecondStart 501.26
274 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
275 TestStartStop/group/no-preload/serial/SecondStart 661.57
276 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
277 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
278 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
279 TestStartStop/group/old-k8s-version/serial/Pause 2.9
281 TestStartStop/group/embed-certs/serial/FirstStart 108.72
282 TestStartStop/group/embed-certs/serial/DeployApp 8.61
283 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.29
284 TestStartStop/group/embed-certs/serial/Stop 91.96
285 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.03
286 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
287 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
288 TestStartStop/group/no-preload/serial/Pause 2.91
290 TestStartStop/group/newest-cni/serial/FirstStart 69.35
291 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.16
292 TestStartStop/group/embed-certs/serial/SecondStart 635.66
293 TestStartStop/group/newest-cni/serial/DeployApp 0
294 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.64
295 TestStartStop/group/newest-cni/serial/Stop 7.14
296 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
297 TestStartStop/group/newest-cni/serial/SecondStart 87.35
298 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
299 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
300 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
301 TestStartStop/group/newest-cni/serial/Pause 3.13
303 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 114.2
304 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.73
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.21
306 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.92
307 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
308 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 400.71
309 TestNetworkPlugins/group/auto/Start 74.21
310 TestStoppedBinaryUpgrade/MinikubeLogs 0.88
311 TestNetworkPlugins/group/kindnet/Start 83.27
312 TestNetworkPlugins/group/auto/KubeletFlags 0.22
313 TestNetworkPlugins/group/auto/NetCatPod 11.44
314 TestNetworkPlugins/group/auto/DNS 0.18
315 TestNetworkPlugins/group/auto/Localhost 0.14
316 TestNetworkPlugins/group/auto/HairPin 0.14
317 TestNetworkPlugins/group/calico/Start 102.45
318 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
319 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
320 TestNetworkPlugins/group/kindnet/NetCatPod 11.47
321 TestNetworkPlugins/group/kindnet/DNS 0.17
322 TestNetworkPlugins/group/kindnet/Localhost 0.14
323 TestNetworkPlugins/group/kindnet/HairPin 0.13
324 TestNetworkPlugins/group/custom-flannel/Start 94
325 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
326 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
327 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
328 TestStartStop/group/embed-certs/serial/Pause 2.39
329 TestNetworkPlugins/group/enable-default-cni/Start 107.55
330 TestNetworkPlugins/group/calico/ControllerPod 5.03
331 TestNetworkPlugins/group/calico/KubeletFlags 0.22
332 TestNetworkPlugins/group/calico/NetCatPod 10.55
333 TestNetworkPlugins/group/calico/DNS 0.26
334 TestNetworkPlugins/group/calico/Localhost 0.17
335 TestNetworkPlugins/group/calico/HairPin 0.16
336 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
337 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.51
338 TestNetworkPlugins/group/flannel/Start 89.63
339 TestNetworkPlugins/group/custom-flannel/DNS 0.19
340 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
341 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
342 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 16.02
343 TestNetworkPlugins/group/bridge/Start 73.85
344 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
345 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
346 TestNetworkPlugins/group/enable-default-cni/NetCatPod 15.62
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.8
349 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
350 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
351 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
352 TestNetworkPlugins/group/flannel/ControllerPod 5.02
353 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
354 TestNetworkPlugins/group/flannel/NetCatPod 11.4
355 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
356 TestNetworkPlugins/group/bridge/NetCatPod 9.42
357 TestNetworkPlugins/group/bridge/DNS 0.16
358 TestNetworkPlugins/group/bridge/Localhost 0.14
359 TestNetworkPlugins/group/bridge/HairPin 0.15
360 TestNetworkPlugins/group/flannel/DNS 0.17
361 TestNetworkPlugins/group/flannel/Localhost 0.15
362 TestNetworkPlugins/group/flannel/HairPin 0.14
TestDownloadOnly/v1.16.0/json-events (10.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-307327 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-307327 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (10.089079961s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (10.09s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-307327
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-307327: exit status 85 (58.697065ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-307327 | jenkins | v1.30.1 | 24 May 23 18:36 UTC |          |
	|         | -p download-only-307327        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/24 18:36:23
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.20.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0524 18:36:23.054766   79165 out.go:296] Setting OutFile to fd 1 ...
	I0524 18:36:23.054893   79165 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 18:36:23.054903   79165 out.go:309] Setting ErrFile to fd 2...
	I0524 18:36:23.054908   79165 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 18:36:23.055014   79165 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16573-71939/.minikube/bin
	W0524 18:36:23.055122   79165 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16573-71939/.minikube/config/config.json: open /home/jenkins/minikube-integration/16573-71939/.minikube/config/config.json: no such file or directory
	I0524 18:36:23.055637   79165 out.go:303] Setting JSON to true
	I0524 18:36:23.056471   79165 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":8318,"bootTime":1684945065,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1034-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0524 18:36:23.056527   79165 start.go:135] virtualization: kvm guest
	I0524 18:36:23.059351   79165 out.go:97] [download-only-307327] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0524 18:36:23.060985   79165 out.go:169] MINIKUBE_LOCATION=16573
	W0524 18:36:23.059465   79165 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16573-71939/.minikube/cache/preloaded-tarball: no such file or directory
	I0524 18:36:23.059525   79165 notify.go:220] Checking for updates...
	I0524 18:36:23.064110   79165 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 18:36:23.066047   79165 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16573-71939/kubeconfig
	I0524 18:36:23.067760   79165 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16573-71939/.minikube
	I0524 18:36:23.069274   79165 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0524 18:36:23.072117   79165 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0524 18:36:23.072283   79165 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 18:36:23.106432   79165 out.go:97] Using the kvm2 driver based on user configuration
	I0524 18:36:23.106450   79165 start.go:295] selected driver: kvm2
	I0524 18:36:23.106455   79165 start.go:870] validating driver "kvm2" against <nil>
	I0524 18:36:23.106712   79165 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 18:36:23.106795   79165 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16573-71939/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0524 18:36:23.121221   79165 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0524 18:36:23.121267   79165 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 18:36:23.121702   79165 start_flags.go:382] Using suggested 6000MB memory alloc based on sys=32101MB, container=0MB
	I0524 18:36:23.121836   79165 start_flags.go:897] Wait components to verify : map[apiserver:true system_pods:true]
	I0524 18:36:23.121888   79165 cni.go:84] Creating CNI manager for ""
	I0524 18:36:23.121902   79165 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0524 18:36:23.121907   79165 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0524 18:36:23.121913   79165 start_flags.go:319] config:
	{Name:download-only-307327 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-307327 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 18:36:23.122075   79165 iso.go:125] acquiring lock: {Name:mk070acfedcbbaf2c11bfabff12ffb52c449689f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 18:36:23.123976   79165 out.go:97] Downloading VM boot image ...
	I0524 18:36:23.124030   79165 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso.sha256 -> /home/jenkins/minikube-integration/16573-71939/.minikube/cache/iso/amd64/minikube-v1.30.1-1684536668-16501-amd64.iso
	I0524 18:36:26.263858   79165 out.go:97] Starting control plane node download-only-307327 in cluster download-only-307327
	I0524 18:36:26.263922   79165 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0524 18:36:26.290881   79165 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0524 18:36:26.290919   79165 cache.go:57] Caching tarball of preloaded images
	I0524 18:36:26.291085   79165 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0524 18:36:26.292863   79165 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0524 18:36:26.292879   79165 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0524 18:36:26.328338   79165 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/16573-71939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0524 18:36:31.761434   79165 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0524 18:36:31.761516   79165 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16573-71939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0524 18:36:32.595202   79165 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0524 18:36:32.595535   79165 profile.go:148] Saving config to /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/download-only-307327/config.json ...
	I0524 18:36:32.595564   79165 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/download-only-307327/config.json: {Name:mk09829302ed54b9e58b0910484c373592d5d056 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 18:36:32.595720   79165 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0524 18:36:32.595909   79165 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/16573-71939/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-307327"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)
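
The log above shows how minikube guards the preload download: the tarball URL carries a "?checksum=md5:..." query, and the downloader hashes the file as it is written to disk, then compares digests before caching. A minimal Go sketch of that verify-while-downloading pattern; the URL, destination path, and helper name below are illustrative, not minikube's internals:

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // downloadAndVerify fetches url into dest while feeding the bytes
    // through an MD5 hash, then compares the digest to wantMD5 (hex).
    func downloadAndVerify(url, dest, wantMD5 string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        f, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer f.Close()

        h := md5.New()
        // One pass writes to disk and to the hash simultaneously.
        if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
        }
        return nil
    }

    func main() {
        // Placeholder values for illustration only.
        err := downloadAndVerify(
            "https://example.com/preloaded-images.tar.lz4",
            "/tmp/preloaded-images.tar.lz4",
            "d96a2b2afa188e17db7ddabb58d563fd",
        )
        fmt.Println(err)
    }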

x
+
TestDownloadOnly/v1.27.2/json-events (5.15s)

=== RUN   TestDownloadOnly/v1.27.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-307327 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-307327 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (5.151749726s)
--- PASS: TestDownloadOnly/v1.27.2/json-events (5.15s)

x
+
TestDownloadOnly/v1.27.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.2/preload-exists
--- PASS: TestDownloadOnly/v1.27.2/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.27.2/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.27.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-307327
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-307327: exit status 85 (55.622715ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-307327 | jenkins | v1.30.1 | 24 May 23 18:36 UTC |          |
	|         | -p download-only-307327        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-307327 | jenkins | v1.30.1 | 24 May 23 18:36 UTC |          |
	|         | -p download-only-307327        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/24 18:36:33
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.20.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0524 18:36:33.202848   79221 out.go:296] Setting OutFile to fd 1 ...
	I0524 18:36:33.202943   79221 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 18:36:33.202951   79221 out.go:309] Setting ErrFile to fd 2...
	I0524 18:36:33.202955   79221 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 18:36:33.203054   79221 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16573-71939/.minikube/bin
	W0524 18:36:33.203161   79221 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16573-71939/.minikube/config/config.json: open /home/jenkins/minikube-integration/16573-71939/.minikube/config/config.json: no such file or directory
	I0524 18:36:33.203525   79221 out.go:303] Setting JSON to true
	I0524 18:36:33.204284   79221 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":8328,"bootTime":1684945065,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1034-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0524 18:36:33.204377   79221 start.go:135] virtualization: kvm guest
	I0524 18:36:33.206831   79221 out.go:97] [download-only-307327] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0524 18:36:33.208670   79221 out.go:169] MINIKUBE_LOCATION=16573
	I0524 18:36:33.206975   79221 notify.go:220] Checking for updates...
	I0524 18:36:33.211947   79221 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 18:36:33.213639   79221 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16573-71939/kubeconfig
	I0524 18:36:33.215180   79221 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16573-71939/.minikube
	I0524 18:36:33.216936   79221 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0524 18:36:33.219709   79221 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0524 18:36:33.220105   79221 config.go:182] Loaded profile config "download-only-307327": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0524 18:36:33.220152   79221 start.go:778] api.Load failed for download-only-307327: filestore "download-only-307327": Docker machine "download-only-307327" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0524 18:36:33.220213   79221 driver.go:375] Setting default libvirt URI to qemu:///system
	W0524 18:36:33.220248   79221 start.go:778] api.Load failed for download-only-307327: filestore "download-only-307327": Docker machine "download-only-307327" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0524 18:36:33.250073   79221 out.go:97] Using the kvm2 driver based on existing profile
	I0524 18:36:33.250095   79221 start.go:295] selected driver: kvm2
	I0524 18:36:33.250102   79221 start.go:870] validating driver "kvm2" against &{Name:download-only-307327 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-307327 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 18:36:33.250441   79221 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 18:36:33.250526   79221 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16573-71939/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0524 18:36:33.263997   79221 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.30.1
	I0524 18:36:33.264580   79221 cni.go:84] Creating CNI manager for ""
	I0524 18:36:33.264599   79221 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0524 18:36:33.264607   79221 start_flags.go:319] config:
	{Name:download-only-307327 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:download-only-307327 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 18:36:33.264719   79221 iso.go:125] acquiring lock: {Name:mk070acfedcbbaf2c11bfabff12ffb52c449689f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 18:36:33.266544   79221 out.go:97] Starting control plane node download-only-307327 in cluster download-only-307327
	I0524 18:36:33.266562   79221 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime containerd
	I0524 18:36:33.296582   79221 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-amd64.tar.lz4
	I0524 18:36:33.296616   79221 cache.go:57] Caching tarball of preloaded images
	I0524 18:36:33.296759   79221 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime containerd
	I0524 18:36:33.298602   79221 out.go:97] Downloading Kubernetes v1.27.2 preload ...
	I0524 18:36:33.298616   79221 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-amd64.tar.lz4 ...
	I0524 18:36:33.327751   79221 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:2b54c0e28812c8d64cf43888ed9073ac -> /home/jenkins/minikube-integration/16573-71939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-amd64.tar.lz4
	I0524 18:36:36.798106   79221 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-amd64.tar.lz4 ...
	I0524 18:36:36.798195   79221 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16573-71939/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-containerd-overlay2-amd64.tar.lz4 ...
	I0524 18:36:37.615703   79221 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on containerd
	I0524 18:36:37.615837   79221 profile.go:148] Saving config to /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/download-only-307327/config.json ...
	I0524 18:36:37.616032   79221 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime containerd
	I0524 18:36:37.616242   79221 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/16573-71939/.minikube/cache/linux/amd64/v1.27.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-307327"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.2/LogsDuration (0.06s)

x
+
TestDownloadOnly/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.12s)

x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-307327
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

x
+
TestBinaryMirror (0.53s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-785125 --alsologtostderr --binary-mirror http://127.0.0.1:45125 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-785125" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-785125
--- PASS: TestBinaryMirror (0.53s)
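
TestBinaryMirror points --binary-mirror at a local HTTP server so that the kubectl/kubeadm/kubelet binaries are fetched from the mirror instead of dl.k8s.io. A rough sketch of the URL rewrite such an option implies; the helper below is an assumption for illustration, not minikube's actual code:

    package main

    import (
        "fmt"
        "strings"
    )

    // mirrorURL swaps the dl.k8s.io prefix for the mirror base while
    // keeping the release/version/arch path intact.
    func mirrorURL(orig, mirrorBase string) string {
        const upstream = "https://dl.k8s.io"
        if strings.HasPrefix(orig, upstream) {
            return mirrorBase + strings.TrimPrefix(orig, upstream)
        }
        return orig
    }

    func main() {
        fmt.Println(mirrorURL(
            "https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubectl",
            "http://127.0.0.1:45125",
        ))
        // Prints: http://127.0.0.1:45125/release/v1.27.2/bin/linux/amd64/kubectl
    }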

x
+
TestOffline (106.09s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-027677 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-027677 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m44.926288262s)
helpers_test.go:175: Cleaning up "offline-containerd-027677" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-027677
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-027677: (1.160949697s)
--- PASS: TestOffline (106.09s)

x
+
TestAddons/Setup (143.29s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-934336 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-934336 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m23.286477274s)
--- PASS: TestAddons/Setup (143.29s)

x
+
TestAddons/parallel/Registry (15.53s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 35.709296ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-l28pq" [76a89e73-674c-4ceb-a938-20853140be77] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.016257393s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6hgpk" [63969966-6207-4c99-bdef-86f46cd3be74] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011971307s
addons_test.go:316: (dbg) Run:  kubectl --context addons-934336 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-934336 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-934336 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.814800278s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-934336 ip
2023/05/24 18:39:17 [DEBUG] GET http://192.168.39.107:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-934336 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.53s)
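
The "waiting 6m0s for pods matching ..." lines come from a helper that polls the API server until a pod carrying the given label selector reports Running. A hedged client-go sketch of that wait loop; the function name, 2-second interval, and kubeconfig handling are assumptions, not the test's actual helper:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForRunningPod polls until some pod matching selector is Running,
    // roughly what the "healthy within ..." check amounts to.
    func waitForRunningPod(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return false, err // abort on API errors
            }
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodRunning {
                    return true, nil
                }
            }
            return false, nil // not ready yet; keep polling
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(waitForRunningPod(cs, "kube-system", "actual-registry=true", 6*time.Minute))
    }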

x
+
TestAddons/parallel/Ingress (22.84s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-934336 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-934336 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-934336 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [10e62690-bbe3-47a5-8e38-76220881be11] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [10e62690-bbe3-47a5-8e38-76220881be11] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.012595687s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-934336 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context addons-934336 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-934336 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.107
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-934336 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-934336 addons disable ingress-dns --alsologtostderr -v=1: (1.914089067s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-934336 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-934336 addons disable ingress --alsologtostderr -v=1: (7.68950216s)
--- PASS: TestAddons/parallel/Ingress (22.84s)
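
The ingress check curls 127.0.0.1 with an explicit "Host: nginx.example.com" header so nginx routes the request by virtual host rather than by IP. One detail worth noting for a Go equivalent: net/http takes the Host header from the request struct, not from the header map. A small sketch, with the address and host name taken from the test:

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Equivalent of: curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'
        req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
        if err != nil {
            panic(err)
        }
        // net/http sends the Host header from req.Host; setting it in
        // req.Header would be ignored.
        req.Host = "nginx.example.com"

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, len(body), "bytes")
    }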

x
+
TestAddons/parallel/InspektorGadget (10.75s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-98mg8" [adb4aaae-2eb3-4326-986b-59e3973632a5] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.039605119s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-934336
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-934336: (5.706394549s)
--- PASS: TestAddons/parallel/InspektorGadget (10.75s)

x
+
TestAddons/parallel/MetricsServer (6.06s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 36.560274ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-lksft" [b3efa348-345a-4366-b4df-370df3750731] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.013754821s
addons_test.go:391: (dbg) Run:  kubectl --context addons-934336 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-934336 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.06s)

x
+
TestAddons/parallel/HelmTiller (15.4s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 4.148196ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-gfwp9" [5db9a0f1-601d-455d-972e-fc1caf3fe0fe] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.015646529s
addons_test.go:449: (dbg) Run:  kubectl --context addons-934336 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-934336 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.915728016s)
addons_test.go:454: kubectl --context addons-934336 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:449: (dbg) Run:  kubectl --context addons-934336 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-934336 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.29104154s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-934336 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (15.40s)

x
+
TestAddons/parallel/CSI (63.77s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 8.040411ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-934336 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-934336 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f4f3ab59-b371-4cbb-97dc-f36456cc3e51] Pending
helpers_test.go:344: "task-pv-pod" [f4f3ab59-b371-4cbb-97dc-f36456cc3e51] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f4f3ab59-b371-4cbb-97dc-f36456cc3e51] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.011510149s
addons_test.go:560: (dbg) Run:  kubectl --context addons-934336 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-934336 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-934336 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-934336 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-934336 delete pod task-pv-pod: (1.294996231s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-934336 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-934336 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-934336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-934336 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [3bafe4c1-8886-4ce8-b979-170531772ed6] Pending
helpers_test.go:344: "task-pv-pod-restore" [3bafe4c1-8886-4ce8-b979-170531772ed6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [3bafe4c1-8886-4ce8-b979-170531772ed6] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.009079335s
addons_test.go:602: (dbg) Run:  kubectl --context addons-934336 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-934336 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-934336 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-934336 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-934336 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.537957803s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-934336 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (63.77s)
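
The long run of "get pvc ... -o jsonpath={.status.phase}" lines above is a poll loop: the helper re-reads the claim's phase until it leaves Pending. A small sketch that shells out to kubectl the same way; the 2-second interval is an assumption:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // pvcPhase runs kubectl as the helper does and returns the claim's
    // .status.phase ("Pending", "Bound", ...).
    func pvcPhase(kubeContext, ns, name string) (string, error) {
        out, err := exec.Command("kubectl", "--context", kubeContext,
            "get", "pvc", name, "-n", ns, "-o", "jsonpath={.status.phase}").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            phase, err := pvcPhase("addons-934336", "default", "hpvc")
            if err == nil && phase == "Bound" {
                fmt.Println("PVC bound")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for PVC to bind")
    }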

x
+
TestAddons/parallel/Headlamp (13.82s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-934336 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-934336 --alsologtostderr -v=1: (1.78924083s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-6b5756787-t7jsw" [ee7d6829-4752-49c5-8ce2-a04da6b122ba] Pending
helpers_test.go:344: "headlamp-6b5756787-t7jsw" [ee7d6829-4752-49c5-8ce2-a04da6b122ba] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-6b5756787-t7jsw" [ee7d6829-4752-49c5-8ce2-a04da6b122ba] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.026792728s
--- PASS: TestAddons/parallel/Headlamp (13.82s)

x
+
TestAddons/parallel/CloudSpanner (5.74s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-cf587f8d-md2fp" [349a486a-be60-4952-84c2-e414208baca6] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.012786572s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-934336
--- PASS: TestAddons/parallel/CloudSpanner (5.74s)

x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-934336 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-934336 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

x
+
TestAddons/StoppedEnableDisable (92.01s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-934336
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-934336: (1m31.811454897s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-934336
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-934336
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-934336
--- PASS: TestAddons/StoppedEnableDisable (92.01s)

x
+
TestCertOptions (55.87s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-243266 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-243266 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (54.35004749s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-243266 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-243266 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-243266 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-243266" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-243266
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-243266: (1.023767959s)
--- PASS: TestCertOptions (55.87s)
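
The "openssl x509 -text -noout" step above verifies that the extra --apiserver-ips and --apiserver-names ended up in the API server certificate's subject alternative names. The same inspection takes a few lines with Go's standard library; the path is the one the test reads inside the VM, so this would run wherever the cert is accessible:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Expect the SANs to include the values passed at start time,
        // e.g. 192.168.15.15 and www.google.com above.
        fmt.Println("DNS names:", cert.DNSNames)
        fmt.Println("IPs:      ", cert.IPAddresses)
        // NotAfter is what TestCertExpiration below exercises.
        fmt.Println("NotAfter: ", cert.NotAfter)
    }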

x
+
TestCertExpiration (248.2s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-389748 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-389748 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (58.937727724s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-389748 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-389748 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (8.004488293s)
helpers_test.go:175: Cleaning up "cert-expiration-389748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-389748
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-389748: (1.255222702s)
--- PASS: TestCertExpiration (248.20s)

x
+
TestForceSystemdFlag (59.64s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-825994 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-825994 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (58.137892466s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-825994 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-825994" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-825994
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-825994: (1.2756523s)
--- PASS: TestForceSystemdFlag (59.64s)
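
Both force-systemd tests finish by reading /etc/containerd/config.toml over ssh, the point being to confirm the runtime was switched to the systemd cgroup driver. A toy scan for that setting, assuming (as is typical for containerd) that it appears as "SystemdCgroup = true" under the runc options; the local file path is a placeholder:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // A local copy of the file the test cats over ssh.
        data, err := os.ReadFile("config.toml")
        if err != nil {
            panic(err)
        }
        for _, line := range strings.Split(string(data), "\n") {
            if strings.Contains(line, "SystemdCgroup") {
                // Expect "SystemdCgroup = true" when --force-systemd is set.
                fmt.Println(strings.TrimSpace(line))
            }
        }
    }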

x
+
TestForceSystemdEnv (63.19s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-808109 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0524 19:23:48.772979   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-808109 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m1.674507112s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-808109 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-808109" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-808109
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-808109: (1.221914443s)
--- PASS: TestForceSystemdEnv (63.19s)

x
+
TestKVMDriverInstallOrUpdate (3.31s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.31s)

x
+
TestErrorSpam/setup (51.94s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-088320 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-088320 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-088320 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-088320 --driver=kvm2  --container-runtime=containerd: (51.942136445s)
--- PASS: TestErrorSpam/setup (51.94s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088320 --log_dir /tmp/nospam-088320 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088320 --log_dir /tmp/nospam-088320 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088320 --log_dir /tmp/nospam-088320 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.69s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088320 --log_dir /tmp/nospam-088320 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088320 --log_dir /tmp/nospam-088320 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088320 --log_dir /tmp/nospam-088320 status
--- PASS: TestErrorSpam/status (0.69s)

TestErrorSpam/pause (1.31s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088320 --log_dir /tmp/nospam-088320 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088320 --log_dir /tmp/nospam-088320 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088320 --log_dir /tmp/nospam-088320 pause
--- PASS: TestErrorSpam/pause (1.31s)

TestErrorSpam/unpause (1.47s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088320 --log_dir /tmp/nospam-088320 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088320 --log_dir /tmp/nospam-088320 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088320 --log_dir /tmp/nospam-088320 unpause
--- PASS: TestErrorSpam/unpause (1.47s)

TestErrorSpam/stop (1.5s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088320 --log_dir /tmp/nospam-088320 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-088320 --log_dir /tmp/nospam-088320 stop: (1.371801692s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088320 --log_dir /tmp/nospam-088320 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088320 --log_dir /tmp/nospam-088320 stop
--- PASS: TestErrorSpam/stop (1.50s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1850: local sync path: /home/jenkins/minikube-integration/16573-71939/.minikube/files/etc/test/nested/copy/79153/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (96.69s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2229: (dbg) Run:  out/minikube-linux-amd64 start -p functional-420572 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0524 18:44:02.539103   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
E0524 18:44:02.544931   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
E0524 18:44:02.555230   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
E0524 18:44:02.575502   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
E0524 18:44:02.615762   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
E0524 18:44:02.696100   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
E0524 18:44:02.856490   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
E0524 18:44:03.177140   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
E0524 18:44:03.818068   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
E0524 18:44:05.098657   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
E0524 18:44:07.660423   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
E0524 18:44:12.780710   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
E0524 18:44:23.021640   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
functional_test.go:2229: (dbg) Done: out/minikube-linux-amd64 start -p functional-420572 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m36.686255766s)
--- PASS: TestFunctional/serial/StartWithProxy (96.69s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.01s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-linux-amd64 start -p functional-420572 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-linux-amd64 start -p functional-420572 --alsologtostderr -v=8: (6.009367289s)
functional_test.go:658: soft start took 6.010101483s for "functional-420572" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.01s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-420572 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 cache add registry.k8s.io/pause:3.1
functional_test.go:1044: (dbg) Done: out/minikube-linux-amd64 -p functional-420572 cache add registry.k8s.io/pause:3.1: (1.083389819s)
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 cache add registry.k8s.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-linux-amd64 -p functional-420572 cache add registry.k8s.io/pause:3.3: (1.150704476s)
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 cache add registry.k8s.io/pause:latest
E0524 18:44:43.502786   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
functional_test.go:1044: (dbg) Done: out/minikube-linux-amd64 -p functional-420572 cache add registry.k8s.io/pause:latest: (1.081676339s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

TestFunctional/serial/CacheCmd/cache/add_local (1.5s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-420572 /tmp/TestFunctionalserialCacheCmdcacheadd_local3061558117/001
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 cache add minikube-local-cache-test:functional-420572
functional_test.go:1084: (dbg) Done: out/minikube-linux-amd64 -p functional-420572 cache add minikube-local-cache-test:functional-420572: (1.222713812s)
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 cache delete minikube-local-cache-test:functional-420572
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-420572
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.50s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1097: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-420572 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (211.157872ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 cache reload
functional_test.go:1153: (dbg) Done: out/minikube-linux-amd64 -p functional-420572 cache reload: (1.275612289s)
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.93s)
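The cache_reload sequence above is: remove the image inside the node, confirm `crictl inspecti` now fails, run `cache reload`, then confirm the image is back. A Go sketch of the same flow, assuming `minikube` is on PATH and the profile from this run is running:

package main

import (
	"log"
	"os/exec"
)

// run shells out to minikube and logs the combined output.
func run(args ...string) error {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	log.Printf("minikube %v:\n%s", args, out)
	return err
}

func main() {
	p := "functional-420572" // profile name from this run
	img := "registry.k8s.io/pause:latest"

	// 1. Remove the image from inside the node.
	if err := run("-p", p, "ssh", "sudo crictl rmi "+img); err != nil {
		log.Fatal(err)
	}
	// 2. inspecti must now fail, as in the log above.
	if err := run("-p", p, "ssh", "sudo crictl inspecti "+img); err == nil {
		log.Fatal("expected inspecti to fail after rmi")
	}
	// 3. Reload images from minikube's local cache.
	if err := run("-p", p, "cache", "reload"); err != nil {
		log.Fatal(err)
	}
	// 4. The image should be present again.
	if err := run("-p", p, "ssh", "sudo crictl inspecti "+img); err != nil {
		log.Fatal("image still missing after cache reload: ", err)
	}
}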

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 kubectl -- --context functional-420572 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-420572 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (42.34s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-linux-amd64 start -p functional-420572 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0524 18:45:24.463067   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
functional_test.go:752: (dbg) Done: out/minikube-linux-amd64 start -p functional-420572 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.335931718s)
functional_test.go:756: restart took 42.336037828s for "functional-420572" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.34s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-420572 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.21s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 logs
functional_test.go:1231: (dbg) Done: out/minikube-linux-amd64 -p functional-420572 logs: (1.207589823s)
--- PASS: TestFunctional/serial/LogsCmd (1.21s)

TestFunctional/serial/LogsFileCmd (1.21s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 logs --file /tmp/TestFunctionalserialLogsFileCmd862571624/001/logs.txt
functional_test.go:1245: (dbg) Done: out/minikube-linux-amd64 -p functional-420572 logs --file /tmp/TestFunctionalserialLogsFileCmd862571624/001/logs.txt: (1.213528733s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.21s)

TestFunctional/parallel/ConfigCmd (0.28s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-420572 config get cpus: exit status 14 (42.814818ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-420572 config get cpus: exit status 14 (42.592666ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.28s)
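As the two non-zero exits above show, `config get` on an unset key fails (status 14 in this run) with the message on stderr. A Go sketch of reading that exit code; the value 14 is what this log shows, not a documented contract:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "functional-420572", "config", "get", "cpus")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Unset key: non-zero exit (14 in the log) plus an error on stderr.
		fmt.Printf("exit code %d: %s", ee.ExitCode(), out)
		return
	}
	if err != nil {
		log.Fatal(err) // e.g. minikube not on PATH
	}
	fmt.Printf("cpus is set to: %s", out)
}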

TestFunctional/parallel/DashboardCmd (12.85s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-420572 --alsologtostderr -v=1]
functional_test.go:905: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-420572 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 85082: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.85s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-linux-amd64 start -p functional-420572 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:969: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-420572 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (131.307015ms)

-- stdout --
	* [functional-420572] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16573
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16573-71939/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16573-71939/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0524 18:45:47.419669   84955 out.go:296] Setting OutFile to fd 1 ...
	I0524 18:45:47.419781   84955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 18:45:47.419789   84955 out.go:309] Setting ErrFile to fd 2...
	I0524 18:45:47.419794   84955 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 18:45:47.419929   84955 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16573-71939/.minikube/bin
	I0524 18:45:47.420419   84955 out.go:303] Setting JSON to false
	I0524 18:45:47.421287   84955 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":8882,"bootTime":1684945065,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1034-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0524 18:45:47.421345   84955 start.go:135] virtualization: kvm guest
	I0524 18:45:47.423753   84955 out.go:177] * [functional-420572] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0524 18:45:47.425430   84955 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 18:45:47.425377   84955 notify.go:220] Checking for updates...
	I0524 18:45:47.427147   84955 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 18:45:47.428780   84955 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16573-71939/kubeconfig
	I0524 18:45:47.430392   84955 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16573-71939/.minikube
	I0524 18:45:47.431875   84955 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0524 18:45:47.434430   84955 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 18:45:47.436373   84955 config.go:182] Loaded profile config "functional-420572": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0524 18:45:47.436901   84955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0524 18:45:47.436972   84955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0524 18:45:47.452769   84955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39525
	I0524 18:45:47.453264   84955 main.go:141] libmachine: () Calling .GetVersion
	I0524 18:45:47.453899   84955 main.go:141] libmachine: Using API Version  1
	I0524 18:45:47.453922   84955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0524 18:45:47.454347   84955 main.go:141] libmachine: () Calling .GetMachineName
	I0524 18:45:47.454573   84955 main.go:141] libmachine: (functional-420572) Calling .DriverName
	I0524 18:45:47.454811   84955 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 18:45:47.455204   84955 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0524 18:45:47.455252   84955 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0524 18:45:47.470266   84955 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39929
	I0524 18:45:47.470678   84955 main.go:141] libmachine: () Calling .GetVersion
	I0524 18:45:47.471188   84955 main.go:141] libmachine: Using API Version  1
	I0524 18:45:47.471215   84955 main.go:141] libmachine: () Calling .SetConfigRaw
	I0524 18:45:47.471505   84955 main.go:141] libmachine: () Calling .GetMachineName
	I0524 18:45:47.471671   84955 main.go:141] libmachine: (functional-420572) Calling .DriverName
	I0524 18:45:47.502669   84955 out.go:177] * Using the kvm2 driver based on existing profile
	I0524 18:45:47.504160   84955 start.go:295] selected driver: kvm2
	I0524 18:45:47.504172   84955 start.go:870] validating driver "kvm2" against &{Name:functional-420572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.27.2 ClusterName:functional-420572 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.139 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraD
isks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 18:45:47.504301   84955 start.go:881] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 18:45:47.506766   84955 out.go:177] 
	W0524 18:45:47.508261   84955 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0524 18:45:47.509692   84955 out.go:177] 

** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-linux-amd64 start -p functional-420572 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.27s)
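Because `--dry-run` only validates flags, the undersized 250MB request above fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A Go sketch exercising the same validation, assuming `minikube` on PATH; the exit code is as observed here, not a stable interface:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "functional-420572",
		"--dry-run", "--memory", "250MB",
		"--driver=kvm2", "--container-runtime=containerd")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	// 23 is the code observed above for RSRC_INSUFFICIENT_REQ_MEMORY.
	if errors.As(err, &ee) && ee.ExitCode() == 23 {
		fmt.Printf("rejected as expected:\n%s", out)
		return
	}
	fmt.Printf("unexpected result (err=%v):\n%s", err, out)
}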

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-linux-amd64 start -p functional-420572 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-420572 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (166.120838ms)

-- stdout --
	* [functional-420572] minikube v1.30.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16573
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16573-71939/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16573-71939/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0524 18:45:43.910419   84488 out.go:296] Setting OutFile to fd 1 ...
	I0524 18:45:43.910535   84488 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 18:45:43.910544   84488 out.go:309] Setting ErrFile to fd 2...
	I0524 18:45:43.910550   84488 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 18:45:43.910783   84488 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16573-71939/.minikube/bin
	I0524 18:45:43.911504   84488 out.go:303] Setting JSON to false
	I0524 18:45:43.912588   84488 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":8879,"bootTime":1684945065,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1034-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0524 18:45:43.912667   84488 start.go:135] virtualization: kvm guest
	I0524 18:45:43.915619   84488 out.go:177] * [functional-420572] minikube v1.30.1 sur Ubuntu 20.04 (kvm/amd64)
	I0524 18:45:43.917274   84488 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 18:45:43.917280   84488 notify.go:220] Checking for updates...
	I0524 18:45:43.919753   84488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 18:45:43.921629   84488 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16573-71939/kubeconfig
	I0524 18:45:43.923418   84488 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16573-71939/.minikube
	I0524 18:45:43.925231   84488 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0524 18:45:43.926830   84488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 18:45:43.928869   84488 config.go:182] Loaded profile config "functional-420572": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0524 18:45:43.929417   84488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0524 18:45:43.929483   84488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0524 18:45:43.951819   84488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39571
	I0524 18:45:43.952381   84488 main.go:141] libmachine: () Calling .GetVersion
	I0524 18:45:43.953132   84488 main.go:141] libmachine: Using API Version  1
	I0524 18:45:43.953156   84488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0524 18:45:43.953609   84488 main.go:141] libmachine: () Calling .GetMachineName
	I0524 18:45:43.953842   84488 main.go:141] libmachine: (functional-420572) Calling .DriverName
	I0524 18:45:43.954042   84488 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 18:45:43.954452   84488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0524 18:45:43.954494   84488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0524 18:45:43.977213   84488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39055
	I0524 18:45:43.977717   84488 main.go:141] libmachine: () Calling .GetVersion
	I0524 18:45:43.978389   84488 main.go:141] libmachine: Using API Version  1
	I0524 18:45:43.978412   84488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0524 18:45:43.978922   84488 main.go:141] libmachine: () Calling .GetMachineName
	I0524 18:45:43.979152   84488 main.go:141] libmachine: (functional-420572) Calling .DriverName
	I0524 18:45:44.014656   84488 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0524 18:45:44.016209   84488 start.go:295] selected driver: kvm2
	I0524 18:45:44.016227   84488 start.go:870] validating driver "kvm2" against &{Name:functional-420572 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.27.2 ClusterName:functional-420572 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.139 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraD
isks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 18:45:44.016384   84488 start.go:881] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 18:45:44.018728   84488 out.go:177] 
	W0524 18:45:44.020238   84488 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0524 18:45:44.021796   84488 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (1.09s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 status
functional_test.go:855: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:867: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)

TestFunctional/parallel/ServiceCmdConnect (10.58s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-420572 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1633: (dbg) Run:  kubectl --context functional-420572 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-bmc8g" [4da3ec55-25f2-4879-aa16-0d34181a0b28] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-6fb669fc84-bmc8g" [4da3ec55-25f2-4879-aa16-0d34181a0b28] Running
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.032446469s
functional_test.go:1647: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 service hello-node-connect --url
functional_test.go:1653: found endpoint for hello-node-connect: http://192.168.50.139:31896
functional_test.go:1673: http://192.168.50.139:31896: success! body:

Hostname: hello-node-connect-6fb669fc84-bmc8g

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.139:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.139:31896
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.58s)
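The connectivity check above reduces to two steps: ask minikube for the service's NodePort URL, then issue an HTTP GET against it. A Go sketch using the profile and service names from this run:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the NodePort URL of the exposed deployment.
	out, err := exec.Command("minikube", "-p", "functional-420572",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	url := strings.TrimSpace(string(out))

	// Hit the endpoint; the echoserver answers any GET.
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %s\n%s", url, resp.Status, body)
}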

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1688: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 addons list
functional_test.go:1700: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (44.4s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [feab6b37-6447-4405-8522-81726e9df93b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.01869034s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-420572 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-420572 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-420572 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-420572 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-420572 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8bc56bab-daac-44d5-b86a-be56ad5deb93] Pending
helpers_test.go:344: "sp-pod" [8bc56bab-daac-44d5-b86a-be56ad5deb93] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8bc56bab-daac-44d5-b86a-be56ad5deb93] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.01272904s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-420572 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-420572 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-420572 delete -f testdata/storage-provisioner/pod.yaml: (1.879632287s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-420572 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [002e1818-b96d-410b-92a1-e363dcb238c7] Pending
helpers_test.go:344: "sp-pod" [002e1818-b96d-410b-92a1-e363dcb238c7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [002e1818-b96d-410b-92a1-e363dcb238c7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.013136476s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-420572 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.40s)
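The persistence check above is: write a file into the PVC-backed mount, delete the pod, recreate it from the same manifest, and confirm the file survived. A compressed Go sketch of that sequence (the readiness polling the test performs between steps is omitted, so this is illustrative rather than robust):

package main

import (
	"log"
	"os/exec"
)

// kubectl runs a command against the functional-420572 context and
// aborts on failure.
func kubectl(args ...string) {
	full := append([]string{"--context", "functional-420572"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (the real test polls here until sp-pod is Running)
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (poll again; the file must have survived the pod restart)
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
}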

TestFunctional/parallel/SSHCmd (0.45s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1723: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh "echo hello"
functional_test.go:1740: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

TestFunctional/parallel/CpCmd (0.89s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh -n functional-420572 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 cp functional-420572:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd665997572/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh -n functional-420572 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.89s)

TestFunctional/parallel/MySQL (27.03s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1788: (dbg) Run:  kubectl --context functional-420572 replace --force -f testdata/mysql.yaml
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-7db894d786-56rkf" [2c4ce507-0557-4d78-8016-e26198052bc5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-7db894d786-56rkf" [2c4ce507-0557-4d78-8016-e26198052bc5] Running
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.007095841s
functional_test.go:1802: (dbg) Run:  kubectl --context functional-420572 exec mysql-7db894d786-56rkf -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-420572 exec mysql-7db894d786-56rkf -- mysql -ppassword -e "show databases;": exit status 1 (183.412215ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-420572 exec mysql-7db894d786-56rkf -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-420572 exec mysql-7db894d786-56rkf -- mysql -ppassword -e "show databases;": exit status 1 (153.112744ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-420572 exec mysql-7db894d786-56rkf -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-420572 exec mysql-7db894d786-56rkf -- mysql -ppassword -e "show databases;": exit status 1 (144.115166ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-420572 exec mysql-7db894d786-56rkf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.03s)
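The ERROR 1045 and ERROR 2002 failures above are expected while mysqld initializes, which is why the test simply re-runs the query until it succeeds. A Go sketch of the same retry pattern; the pod name is from this run and will differ:

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	pod := "mysql-7db894d786-56rkf" // pod name from this run; yours will differ
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-420572",
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			log.Printf("databases:\n%s", out)
			return
		}
		// ERROR 1045 / ERROR 2002 are expected while mysqld initializes.
		log.Printf("attempt %d failed (%v), retrying:\n%s", attempt, err, out)
		time.Sleep(5 * time.Second)
	}
	log.Fatal("mysql never became ready")
}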

TestFunctional/parallel/FileSync (0.26s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1924: Checking for existence of /etc/test/nested/copy/79153/hosts within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh "sudo cat /etc/test/nested/copy/79153/hosts"
functional_test.go:1931: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)

TestFunctional/parallel/CertSync (1.46s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1967: Checking for existence of /etc/ssl/certs/79153.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh "sudo cat /etc/ssl/certs/79153.pem"
functional_test.go:1967: Checking for existence of /usr/share/ca-certificates/79153.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh "sudo cat /usr/share/ca-certificates/79153.pem"
functional_test.go:1967: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/791532.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh "sudo cat /etc/ssl/certs/791532.pem"
functional_test.go:1994: Checking for existence of /usr/share/ca-certificates/791532.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh "sudo cat /usr/share/ca-certificates/791532.pem"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.46s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:217: (dbg) Run:  kubectl --context functional-420572 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
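
Note: the --template argument is a Go text/template: (index .items 0).metadata.labels selects the first node's label map and the range prints each key. A standalone demo of the same template logic, applied directly to a sample label map instead of the kubectl node list:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Sample labels; a real node carries kubernetes.io/* labels like these.
	labels := map[string]string{
		"kubernetes.io/arch":     "amd64",
		"kubernetes.io/hostname": "functional-420572",
		"kubernetes.io/os":       "linux",
	}
	// Same range/print logic as the kubectl --template above.
	t := template.Must(template.New("labels").Parse(
		"{{range $k, $v := .}}{{$k}} {{end}}"))
	if err := t.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}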

TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2022: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh "sudo systemctl is-active docker"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-420572 ssh "sudo systemctl is-active docker": exit status 1 (234.606212ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2022: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh "sudo systemctl is-active crio"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-420572 ssh "sudo systemctl is-active crio": exit status 1 (207.889419ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
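
Note: systemctl is-active exits 0 only when the unit is active; "inactive" produces state code 3 (the "Process exited with status 3" in the stderr blocks), which ssh propagates and minikube surfaces as exit status 1. Because this profile runs containerd, a non-zero exit with "inactive" on stdout is the pass condition for docker and crio. A sketch of the same assertion:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "crio"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-420572",
			"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
		state := strings.TrimSpace(string(out))
		// Pass condition: non-zero exit AND "inactive" on stdout.
		if err != nil && state == "inactive" {
			fmt.Printf("ok: %s is inactive\n", unit)
			continue
		}
		fmt.Printf("FAIL: %s state=%q err=%v\n", unit, state, err)
	}
}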

TestFunctional/parallel/License (0.17s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2283: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.17s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.30s)

TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1313: Took "279.842508ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1327: Took "51.717847ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-420572 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1443: (dbg) Run:  kubectl --context functional-420572 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-fdxx6" [f9231652-51db-4b38-8364-9e9f425054d3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-775766b4cc-fdxx6" [f9231652-51db-4b38-8364-9e9f425054d3] Running
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.012025835s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.26s)
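
Note: the helper behind the two pod-state lines above polls pods labelled app=hello-node until one reports Ready. A rough stand-in using kubectl's JSONPath output (poll interval and count are illustrative, not the helper's real values):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// One "Ready" condition status per matching pod, newline-separated.
	jp := `jsonpath={range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`
	for i := 0; i < 60; i++ { // illustrative poll budget
		out, _ := exec.Command("kubectl", "--context", "functional-420572",
			"get", "pods", "-l", "app=hello-node", "-o", jp).Output()
		if strings.Contains(string(out), "True") {
			fmt.Println("app=hello-node healthy")
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out waiting for app=hello-node")
}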

TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1364: Took "213.914714ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1377: Took "41.561284ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.52s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2265: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 image ls --format short --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-420572 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.2
registry.k8s.io/kube-proxy:v1.27.2
registry.k8s.io/kube-controller-manager:v1.27.2
registry.k8s.io/kube-apiserver:v1.27.2
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-420572
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-420572
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:267: (dbg) Stderr: out/minikube-linux-amd64 -p functional-420572 image ls --format short --alsologtostderr:
I0524 18:45:58.511841   86113 out.go:296] Setting OutFile to fd 1 ...
I0524 18:45:58.511979   86113 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 18:45:58.511987   86113 out.go:309] Setting ErrFile to fd 2...
I0524 18:45:58.511992   86113 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 18:45:58.512093   86113 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16573-71939/.minikube/bin
I0524 18:45:58.512592   86113 config.go:182] Loaded profile config "functional-420572": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0524 18:45:58.512692   86113 config.go:182] Loaded profile config "functional-420572": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0524 18:45:58.513023   86113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0524 18:45:58.513069   86113 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 18:45:58.527849   86113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40959
I0524 18:45:58.528319   86113 main.go:141] libmachine: () Calling .GetVersion
I0524 18:45:58.528936   86113 main.go:141] libmachine: Using API Version  1
I0524 18:45:58.528960   86113 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 18:45:58.529312   86113 main.go:141] libmachine: () Calling .GetMachineName
I0524 18:45:58.529515   86113 main.go:141] libmachine: (functional-420572) Calling .GetState
I0524 18:45:58.531206   86113 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0524 18:45:58.531259   86113 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 18:45:58.548087   86113 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43559
I0524 18:45:58.548808   86113 main.go:141] libmachine: () Calling .GetVersion
I0524 18:45:58.550275   86113 main.go:141] libmachine: Using API Version  1
I0524 18:45:58.550296   86113 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 18:45:58.550671   86113 main.go:141] libmachine: () Calling .GetMachineName
I0524 18:45:58.550898   86113 main.go:141] libmachine: (functional-420572) Calling .DriverName
I0524 18:45:58.551131   86113 ssh_runner.go:195] Run: systemctl --version
I0524 18:45:58.551159   86113 main.go:141] libmachine: (functional-420572) Calling .GetSSHHostname
I0524 18:45:58.554087   86113 main.go:141] libmachine: (functional-420572) DBG | domain functional-420572 has defined MAC address 52:54:00:13:04:14 in network mk-functional-420572
I0524 18:45:58.554565   86113 main.go:141] libmachine: (functional-420572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:04:14", ip: ""} in network mk-functional-420572: {Iface:virbr1 ExpiryTime:2023-05-24 19:43:13 +0000 UTC Type:0 Mac:52:54:00:13:04:14 Iaid: IPaddr:192.168.50.139 Prefix:24 Hostname:functional-420572 Clientid:01:52:54:00:13:04:14}
I0524 18:45:58.554596   86113 main.go:141] libmachine: (functional-420572) DBG | domain functional-420572 has defined IP address 192.168.50.139 and MAC address 52:54:00:13:04:14 in network mk-functional-420572
I0524 18:45:58.554747   86113 main.go:141] libmachine: (functional-420572) Calling .GetSSHPort
I0524 18:45:58.554926   86113 main.go:141] libmachine: (functional-420572) Calling .GetSSHKeyPath
I0524 18:45:58.555123   86113 main.go:141] libmachine: (functional-420572) Calling .GetSSHUsername
I0524 18:45:58.555291   86113 sshutil.go:53] new ssh client: &{IP:192.168.50.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/machines/functional-420572/id_rsa Username:docker}
I0524 18:45:58.655299   86113 ssh_runner.go:195] Run: sudo crictl images --output json
I0524 18:45:58.684829   86113 main.go:141] libmachine: Making call to close driver server
I0524 18:45:58.684840   86113 main.go:141] libmachine: (functional-420572) Calling .Close
I0524 18:45:58.685121   86113 main.go:141] libmachine: Successfully made call to close driver server
I0524 18:45:58.685146   86113 main.go:141] libmachine: Making call to close connection to plugin binary
I0524 18:45:58.685148   86113 main.go:141] libmachine: (functional-420572) DBG | Closing plugin on server side
I0524 18:45:58.685155   86113 main.go:141] libmachine: Making call to close driver server
I0524 18:45:58.685180   86113 main.go:141] libmachine: (functional-420572) Calling .Close
I0524 18:45:58.685451   86113 main.go:141] libmachine: Successfully made call to close driver server
I0524 18:45:58.685481   86113 main.go:141] libmachine: Making call to close connection to plugin binary
I0524 18:45:58.685458   86113 main.go:141] libmachine: (functional-420572) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 image ls --format table --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-420572 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/google-containers/addon-resizer      | functional-420572  | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/etcd                        | 3.5.7-0            | sha256:86b6af | 102MB  |
| registry.k8s.io/kube-proxy                  | v1.27.2            | sha256:b8aa50 | 23.9MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| docker.io/kindest/kindnetd                  | v20230511-dc714da8 | sha256:b0b1fa | 27.7MB |
| docker.io/library/minikube-local-cache-test | functional-420572  | sha256:b9ccd8 | 1kB    |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| docker.io/library/nginx                     | latest             | sha256:a7be61 | 57MB   |
| registry.k8s.io/kube-apiserver              | v1.27.2            | sha256:c5b13e | 33.4MB |
| registry.k8s.io/kube-controller-manager     | v1.27.2            | sha256:ac2b74 | 31MB   |
| registry.k8s.io/kube-scheduler              | v1.27.2            | sha256:89e70d | 18.2MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:267: (dbg) Stderr: out/minikube-linux-amd64 -p functional-420572 image ls --format table --alsologtostderr:
I0524 18:46:00.902020   86238 out.go:296] Setting OutFile to fd 1 ...
I0524 18:46:00.902131   86238 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 18:46:00.902140   86238 out.go:309] Setting ErrFile to fd 2...
I0524 18:46:00.902144   86238 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 18:46:00.902274   86238 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16573-71939/.minikube/bin
I0524 18:46:00.902820   86238 config.go:182] Loaded profile config "functional-420572": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0524 18:46:00.902928   86238 config.go:182] Loaded profile config "functional-420572": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0524 18:46:00.903295   86238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0524 18:46:00.903350   86238 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 18:46:00.918186   86238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45171
I0524 18:46:00.918616   86238 main.go:141] libmachine: () Calling .GetVersion
I0524 18:46:00.919170   86238 main.go:141] libmachine: Using API Version  1
I0524 18:46:00.919191   86238 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 18:46:00.919561   86238 main.go:141] libmachine: () Calling .GetMachineName
I0524 18:46:00.919795   86238 main.go:141] libmachine: (functional-420572) Calling .GetState
I0524 18:46:00.921844   86238 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0524 18:46:00.921894   86238 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 18:46:00.937150   86238 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37963
I0524 18:46:00.937609   86238 main.go:141] libmachine: () Calling .GetVersion
I0524 18:46:00.938087   86238 main.go:141] libmachine: Using API Version  1
I0524 18:46:00.938111   86238 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 18:46:00.938417   86238 main.go:141] libmachine: () Calling .GetMachineName
I0524 18:46:00.938592   86238 main.go:141] libmachine: (functional-420572) Calling .DriverName
I0524 18:46:00.938780   86238 ssh_runner.go:195] Run: systemctl --version
I0524 18:46:00.938806   86238 main.go:141] libmachine: (functional-420572) Calling .GetSSHHostname
I0524 18:46:00.941193   86238 main.go:141] libmachine: (functional-420572) DBG | domain functional-420572 has defined MAC address 52:54:00:13:04:14 in network mk-functional-420572
I0524 18:46:00.941627   86238 main.go:141] libmachine: (functional-420572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:04:14", ip: ""} in network mk-functional-420572: {Iface:virbr1 ExpiryTime:2023-05-24 19:43:13 +0000 UTC Type:0 Mac:52:54:00:13:04:14 Iaid: IPaddr:192.168.50.139 Prefix:24 Hostname:functional-420572 Clientid:01:52:54:00:13:04:14}
I0524 18:46:00.941648   86238 main.go:141] libmachine: (functional-420572) DBG | domain functional-420572 has defined IP address 192.168.50.139 and MAC address 52:54:00:13:04:14 in network mk-functional-420572
I0524 18:46:00.941877   86238 main.go:141] libmachine: (functional-420572) Calling .GetSSHPort
I0524 18:46:00.942059   86238 main.go:141] libmachine: (functional-420572) Calling .GetSSHKeyPath
I0524 18:46:00.942226   86238 main.go:141] libmachine: (functional-420572) Calling .GetSSHUsername
I0524 18:46:00.942400   86238 sshutil.go:53] new ssh client: &{IP:192.168.50.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/machines/functional-420572/id_rsa Username:docker}
I0524 18:46:01.098906   86238 ssh_runner.go:195] Run: sudo crictl images --output json
I0524 18:46:01.229980   86238 main.go:141] libmachine: Making call to close driver server
I0524 18:46:01.230000   86238 main.go:141] libmachine: (functional-420572) Calling .Close
I0524 18:46:01.230295   86238 main.go:141] libmachine: Successfully made call to close driver server
I0524 18:46:01.230320   86238 main.go:141] libmachine: Making call to close connection to plugin binary
I0524 18:46:01.230393   86238 main.go:141] libmachine: (functional-420572) DBG | Closing plugin on server side
I0524 18:46:01.230413   86238 main.go:141] libmachine: Making call to close driver server
I0524 18:46:01.230430   86238 main.go:141] libmachine: (functional-420572) Calling .Close
I0524 18:46:01.230643   86238 main.go:141] libmachine: Successfully made call to close driver server
I0524 18:46:01.230659   86238 main.go:141] libmachine: (functional-420572) DBG | Closing plugin on server side
I0524 18:46:01.230673   86238 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.39s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 image ls --format json --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-420572 image ls --format json --alsologtostderr:
[{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-420572"],"size":"10823156"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d2
9f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:b9ccd8a6dfd5b47dc615e0689255f71a893c91431ad1feab064b97fd65b608c7","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-420572"],"size":"1005"},{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"sha256:c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370","repoDigests":["registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.2"],"size":"33362711"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a
5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","repoDigests":["registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"101639218"},{"id":"sha256:ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:b0990ef7c
9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.2"],"size":"30971340"},{"id":"sha256:b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee","repoDigests":["registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.2"],"size":"23895334"},{"id":"sha256:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"27731571"},{"id":"sha256:a7be6198544f09a75b26e6376459b47c5b9972e7aa742af9f356b540fe852cd4","repoDigests":["docker.io/library/nginx@sha256:f5747a42e3adcb3168049d63278d7251d91185bb5111d2563d58729a5c9179b0"],"repoTags":["docker.io/library/nginx:latest"],"size":"57002949"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","r
epoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.2"],"size":"18230943"}]
functional_test.go:267: (dbg) Stderr: out/minikube-linux-amd64 -p functional-420572 image ls --format json --alsologtostderr:
I0524 18:46:00.544635   86215 out.go:296] Setting OutFile to fd 1 ...
I0524 18:46:00.544791   86215 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 18:46:00.544803   86215 out.go:309] Setting ErrFile to fd 2...
I0524 18:46:00.544809   86215 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 18:46:00.544971   86215 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16573-71939/.minikube/bin
I0524 18:46:00.545748   86215 config.go:182] Loaded profile config "functional-420572": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0524 18:46:00.545882   86215 config.go:182] Loaded profile config "functional-420572": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0524 18:46:00.546382   86215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0524 18:46:00.546448   86215 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 18:46:00.561331   86215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33905
I0524 18:46:00.561778   86215 main.go:141] libmachine: () Calling .GetVersion
I0524 18:46:00.562461   86215 main.go:141] libmachine: Using API Version  1
I0524 18:46:00.562497   86215 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 18:46:00.562861   86215 main.go:141] libmachine: () Calling .GetMachineName
I0524 18:46:00.563083   86215 main.go:141] libmachine: (functional-420572) Calling .GetState
I0524 18:46:00.565038   86215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0524 18:46:00.565084   86215 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 18:46:00.579284   86215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33147
I0524 18:46:00.579681   86215 main.go:141] libmachine: () Calling .GetVersion
I0524 18:46:00.580151   86215 main.go:141] libmachine: Using API Version  1
I0524 18:46:00.580180   86215 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 18:46:00.580500   86215 main.go:141] libmachine: () Calling .GetMachineName
I0524 18:46:00.580667   86215 main.go:141] libmachine: (functional-420572) Calling .DriverName
I0524 18:46:00.580856   86215 ssh_runner.go:195] Run: systemctl --version
I0524 18:46:00.580881   86215 main.go:141] libmachine: (functional-420572) Calling .GetSSHHostname
I0524 18:46:00.583353   86215 main.go:141] libmachine: (functional-420572) DBG | domain functional-420572 has defined MAC address 52:54:00:13:04:14 in network mk-functional-420572
I0524 18:46:00.583798   86215 main.go:141] libmachine: (functional-420572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:04:14", ip: ""} in network mk-functional-420572: {Iface:virbr1 ExpiryTime:2023-05-24 19:43:13 +0000 UTC Type:0 Mac:52:54:00:13:04:14 Iaid: IPaddr:192.168.50.139 Prefix:24 Hostname:functional-420572 Clientid:01:52:54:00:13:04:14}
I0524 18:46:00.583823   86215 main.go:141] libmachine: (functional-420572) DBG | domain functional-420572 has defined IP address 192.168.50.139 and MAC address 52:54:00:13:04:14 in network mk-functional-420572
I0524 18:46:00.583957   86215 main.go:141] libmachine: (functional-420572) Calling .GetSSHPort
I0524 18:46:00.584135   86215 main.go:141] libmachine: (functional-420572) Calling .GetSSHKeyPath
I0524 18:46:00.584315   86215 main.go:141] libmachine: (functional-420572) Calling .GetSSHUsername
I0524 18:46:00.584478   86215 sshutil.go:53] new ssh client: &{IP:192.168.50.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/machines/functional-420572/id_rsa Username:docker}
I0524 18:46:00.712041   86215 ssh_runner.go:195] Run: sudo crictl images --output json
I0524 18:46:00.845966   86215 main.go:141] libmachine: Making call to close driver server
I0524 18:46:00.845983   86215 main.go:141] libmachine: (functional-420572) Calling .Close
I0524 18:46:00.846264   86215 main.go:141] libmachine: Successfully made call to close driver server
I0524 18:46:00.846294   86215 main.go:141] libmachine: Making call to close connection to plugin binary
I0524 18:46:00.846307   86215 main.go:141] libmachine: Making call to close driver server
I0524 18:46:00.846316   86215 main.go:141] libmachine: (functional-420572) Calling .Close
I0524 18:46:00.846546   86215 main.go:141] libmachine: Successfully made call to close driver server
I0524 18:46:00.846557   86215 main.go:141] libmachine: Making call to close connection to plugin binary
I0524 18:46:00.846599   86215 main.go:141] libmachine: (functional-420572) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.35s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 image ls --format yaml --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-420572 image ls --format yaml --alsologtostderr:
- id: sha256:c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.2
size: "33362711"
- id: sha256:89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.2
size: "18230943"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests:
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "101639218"
- id: sha256:b9ccd8a6dfd5b47dc615e0689255f71a893c91431ad1feab064b97fd65b608c7
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-420572
size: "1005"
- id: sha256:a7be6198544f09a75b26e6376459b47c5b9972e7aa742af9f356b540fe852cd4
repoDigests:
- docker.io/library/nginx@sha256:f5747a42e3adcb3168049d63278d7251d91185bb5111d2563d58729a5c9179b0
repoTags:
- docker.io/library/nginx:latest
size: "57002949"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "27731571"
- id: sha256:ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.2
size: "30971340"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee
repoDigests:
- registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f
repoTags:
- registry.k8s.io/kube-proxy:v1.27.2
size: "23895334"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-420572
size: "10823156"

functional_test.go:267: (dbg) Stderr: out/minikube-linux-amd64 -p functional-420572 image ls --format yaml --alsologtostderr:
I0524 18:45:58.732257   86137 out.go:296] Setting OutFile to fd 1 ...
I0524 18:45:58.732361   86137 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 18:45:58.732370   86137 out.go:309] Setting ErrFile to fd 2...
I0524 18:45:58.732375   86137 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 18:45:58.732494   86137 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16573-71939/.minikube/bin
I0524 18:45:58.733052   86137 config.go:182] Loaded profile config "functional-420572": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0524 18:45:58.733148   86137 config.go:182] Loaded profile config "functional-420572": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0524 18:45:58.733513   86137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0524 18:45:58.733566   86137 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 18:45:58.750002   86137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35115
I0524 18:45:58.750647   86137 main.go:141] libmachine: () Calling .GetVersion
I0524 18:45:58.751239   86137 main.go:141] libmachine: Using API Version  1
I0524 18:45:58.751262   86137 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 18:45:58.751723   86137 main.go:141] libmachine: () Calling .GetMachineName
I0524 18:45:58.751943   86137 main.go:141] libmachine: (functional-420572) Calling .GetState
I0524 18:45:58.754212   86137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0524 18:45:58.754261   86137 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 18:45:58.768127   86137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45361
I0524 18:45:58.768516   86137 main.go:141] libmachine: () Calling .GetVersion
I0524 18:45:58.769012   86137 main.go:141] libmachine: Using API Version  1
I0524 18:45:58.769039   86137 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 18:45:58.769447   86137 main.go:141] libmachine: () Calling .GetMachineName
I0524 18:45:58.769641   86137 main.go:141] libmachine: (functional-420572) Calling .DriverName
I0524 18:45:58.769864   86137 ssh_runner.go:195] Run: systemctl --version
I0524 18:45:58.769905   86137 main.go:141] libmachine: (functional-420572) Calling .GetSSHHostname
I0524 18:45:58.772493   86137 main.go:141] libmachine: (functional-420572) DBG | domain functional-420572 has defined MAC address 52:54:00:13:04:14 in network mk-functional-420572
I0524 18:45:58.772909   86137 main.go:141] libmachine: (functional-420572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:04:14", ip: ""} in network mk-functional-420572: {Iface:virbr1 ExpiryTime:2023-05-24 19:43:13 +0000 UTC Type:0 Mac:52:54:00:13:04:14 Iaid: IPaddr:192.168.50.139 Prefix:24 Hostname:functional-420572 Clientid:01:52:54:00:13:04:14}
I0524 18:45:58.772948   86137 main.go:141] libmachine: (functional-420572) DBG | domain functional-420572 has defined IP address 192.168.50.139 and MAC address 52:54:00:13:04:14 in network mk-functional-420572
I0524 18:45:58.773013   86137 main.go:141] libmachine: (functional-420572) Calling .GetSSHPort
I0524 18:45:58.773176   86137 main.go:141] libmachine: (functional-420572) Calling .GetSSHKeyPath
I0524 18:45:58.773338   86137 main.go:141] libmachine: (functional-420572) Calling .GetSSHUsername
I0524 18:45:58.773557   86137 sshutil.go:53] new ssh client: &{IP:192.168.50.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/machines/functional-420572/id_rsa Username:docker}
I0524 18:45:58.868039   86137 ssh_runner.go:195] Run: sudo crictl images --output json
I0524 18:45:58.910122   86137 main.go:141] libmachine: Making call to close driver server
I0524 18:45:58.910136   86137 main.go:141] libmachine: (functional-420572) Calling .Close
I0524 18:45:58.910440   86137 main.go:141] libmachine: Successfully made call to close driver server
I0524 18:45:58.910473   86137 main.go:141] libmachine: Making call to close connection to plugin binary
I0524 18:45:58.910486   86137 main.go:141] libmachine: Making call to close driver server
I0524 18:45:58.910498   86137 main.go:141] libmachine: (functional-420572) Calling .Close
I0524 18:45:58.910764   86137 main.go:141] libmachine: (functional-420572) DBG | Closing plugin on server side
I0524 18:45:58.910810   86137 main.go:141] libmachine: Successfully made call to close driver server
I0524 18:45:58.910826   86137 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh pgrep buildkitd
functional_test.go:306: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-420572 ssh pgrep buildkitd: exit status 1 (195.078876ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 image build -t localhost/my-image:functional-420572 testdata/build --alsologtostderr
2023/05/24 18:46:00 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p functional-420572 image build -t localhost/my-image:functional-420572 testdata/build --alsologtostderr: (3.638030104s)
functional_test.go:321: (dbg) Stderr: out/minikube-linux-amd64 -p functional-420572 image build -t localhost/my-image:functional-420572 testdata/build --alsologtostderr:
I0524 18:45:59.151090   86190 out.go:296] Setting OutFile to fd 1 ...
I0524 18:45:59.151215   86190 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 18:45:59.151223   86190 out.go:309] Setting ErrFile to fd 2...
I0524 18:45:59.151227   86190 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 18:45:59.151353   86190 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16573-71939/.minikube/bin
I0524 18:45:59.151845   86190 config.go:182] Loaded profile config "functional-420572": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0524 18:45:59.152361   86190 config.go:182] Loaded profile config "functional-420572": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
I0524 18:45:59.152704   86190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0524 18:45:59.152758   86190 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 18:45:59.167072   86190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43043
I0524 18:45:59.167552   86190 main.go:141] libmachine: () Calling .GetVersion
I0524 18:45:59.168113   86190 main.go:141] libmachine: Using API Version  1
I0524 18:45:59.168134   86190 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 18:45:59.168434   86190 main.go:141] libmachine: () Calling .GetMachineName
I0524 18:45:59.168611   86190 main.go:141] libmachine: (functional-420572) Calling .GetState
I0524 18:45:59.170865   86190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0524 18:45:59.170935   86190 main.go:141] libmachine: Launching plugin server for driver kvm2
I0524 18:45:59.184893   86190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34089
I0524 18:45:59.185241   86190 main.go:141] libmachine: () Calling .GetVersion
I0524 18:45:59.185658   86190 main.go:141] libmachine: Using API Version  1
I0524 18:45:59.185704   86190 main.go:141] libmachine: () Calling .SetConfigRaw
I0524 18:45:59.185996   86190 main.go:141] libmachine: () Calling .GetMachineName
I0524 18:45:59.186165   86190 main.go:141] libmachine: (functional-420572) Calling .DriverName
I0524 18:45:59.186375   86190 ssh_runner.go:195] Run: systemctl --version
I0524 18:45:59.186399   86190 main.go:141] libmachine: (functional-420572) Calling .GetSSHHostname
I0524 18:45:59.189001   86190 main.go:141] libmachine: (functional-420572) DBG | domain functional-420572 has defined MAC address 52:54:00:13:04:14 in network mk-functional-420572
I0524 18:45:59.189404   86190 main.go:141] libmachine: (functional-420572) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:04:14", ip: ""} in network mk-functional-420572: {Iface:virbr1 ExpiryTime:2023-05-24 19:43:13 +0000 UTC Type:0 Mac:52:54:00:13:04:14 Iaid: IPaddr:192.168.50.139 Prefix:24 Hostname:functional-420572 Clientid:01:52:54:00:13:04:14}
I0524 18:45:59.189435   86190 main.go:141] libmachine: (functional-420572) DBG | domain functional-420572 has defined IP address 192.168.50.139 and MAC address 52:54:00:13:04:14 in network mk-functional-420572
I0524 18:45:59.189624   86190 main.go:141] libmachine: (functional-420572) Calling .GetSSHPort
I0524 18:45:59.189790   86190 main.go:141] libmachine: (functional-420572) Calling .GetSSHKeyPath
I0524 18:45:59.189944   86190 main.go:141] libmachine: (functional-420572) Calling .GetSSHUsername
I0524 18:45:59.190069   86190 sshutil.go:53] new ssh client: &{IP:192.168.50.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/machines/functional-420572/id_rsa Username:docker}
I0524 18:45:59.289284   86190 build_images.go:151] Building image from path: /tmp/build.251885758.tar
I0524 18:45:59.289360   86190 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0524 18:45:59.300573   86190 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.251885758.tar
I0524 18:45:59.305244   86190 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.251885758.tar: stat -c "%s %y" /var/lib/minikube/build/build.251885758.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.251885758.tar': No such file or directory
I0524 18:45:59.305272   86190 ssh_runner.go:362] scp /tmp/build.251885758.tar --> /var/lib/minikube/build/build.251885758.tar (3072 bytes)
I0524 18:45:59.331206   86190 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.251885758
I0524 18:45:59.340256   86190 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.251885758 -xf /var/lib/minikube/build/build.251885758.tar
I0524 18:45:59.349119   86190 containerd.go:378] Building image: /var/lib/minikube/build/build.251885758
I0524 18:45:59.349155   86190 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.251885758 --local dockerfile=/var/lib/minikube/build/build.251885758 --output type=image,name=localhost/my-image:functional-420572
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 29B
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.1s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.3s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.2s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 1.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.3s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:6b73e697afcc1a4b1284dd821ae84c61034a88609015c7c967d38982f2cfce9d 0.0s done
#8 exporting config sha256:14ed023686ad05d61f0a6d47bf98a3027328d3dd31ebed27712426978d4fc38d 0.0s done
#8 naming to localhost/my-image:functional-420572 done
#8 DONE 0.2s
I0524 18:46:02.717193   86190 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.251885758 --local dockerfile=/var/lib/minikube/build/build.251885758 --output type=image,name=localhost/my-image:functional-420572: (3.368006194s)
I0524 18:46:02.717270   86190 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.251885758
I0524 18:46:02.728162   86190 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.251885758.tar
I0524 18:46:02.745387   86190 build_images.go:207] Built localhost/my-image:functional-420572 from /tmp/build.251885758.tar
I0524 18:46:02.745419   86190 build_images.go:123] succeeded building to: functional-420572
I0524 18:46:02.745424   86190 build_images.go:124] failed building to: 
I0524 18:46:02.745455   86190 main.go:141] libmachine: Making call to close driver server
I0524 18:46:02.745471   86190 main.go:141] libmachine: (functional-420572) Calling .Close
I0524 18:46:02.745811   86190 main.go:141] libmachine: (functional-420572) DBG | Closing plugin on server side
I0524 18:46:02.745859   86190 main.go:141] libmachine: Successfully made call to close driver server
I0524 18:46:02.745870   86190 main.go:141] libmachine: Making call to close connection to plugin binary
I0524 18:46:02.745881   86190 main.go:141] libmachine: Making call to close driver server
I0524 18:46:02.745895   86190 main.go:141] libmachine: (functional-420572) Calling .Close
I0524 18:46:02.746243   86190 main.go:141] libmachine: Successfully made call to close driver server
I0524 18:46:02.746262   86190 main.go:141] libmachine: Making call to close connection to plugin binary
I0524 18:46:02.746285   86190 main.go:141] libmachine: (functional-420572) DBG | Closing plugin on server side
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.08s)
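
Note: the buildkit steps #5-#7 let you read off what testdata/build contains: a busybox base image, a no-op RUN, and a single ADD. A plausible reconstruction of that Dockerfile (the log reports a 97-byte Dockerfile, so the real file may carry an extra line such as a comment; this is an inference from the build steps, not the file itself):

FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /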

TestFunctional/parallel/ImageCommands/Setup (0.95s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:340: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:345: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-420572
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.95s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 image load --daemon gcr.io/google-containers/addon-resizer:functional-420572 --alsologtostderr
functional_test.go:353: (dbg) Done: out/minikube-linux-amd64 -p functional-420572 image load --daemon gcr.io/google-containers/addon-resizer:functional-420572 --alsologtostderr: (3.972847067s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.19s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 image load --daemon gcr.io/google-containers/addon-resizer:functional-420572 --alsologtostderr
functional_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p functional-420572 image load --daemon gcr.io/google-containers/addon-resizer:functional-420572 --alsologtostderr: (4.037688637s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.28s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:233: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:238: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-420572
functional_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 image load --daemon gcr.io/google-containers/addon-resizer:functional-420572 --alsologtostderr
functional_test.go:243: (dbg) Done: out/minikube-linux-amd64 -p functional-420572 image load --daemon gcr.io/google-containers/addon-resizer:functional-420572 --alsologtostderr: (4.946321618s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.10s)
TestFunctional/parallel/MountCmd/any-port (7.92s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-420572 /tmp/TestFunctionalparallelMountCmdany-port656053152/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1684953944027769818" to /tmp/TestFunctionalparallelMountCmdany-port656053152/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1684953944027769818" to /tmp/TestFunctionalparallelMountCmdany-port656053152/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1684953944027769818" to /tmp/TestFunctionalparallelMountCmdany-port656053152/001/test-1684953944027769818
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-420572 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (247.289581ms)
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 24 18:45 created-by-test
-rw-r--r-- 1 docker docker 24 May 24 18:45 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 24 18:45 test-1684953944027769818
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh cat /mount-9p/test-1684953944027769818
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-420572 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [85cfd60b-a560-4267-b894-fc75e24cb743] Pending
helpers_test.go:344: "busybox-mount" [85cfd60b-a560-4267-b894-fc75e24cb743] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [85cfd60b-a560-4267-b894-fc75e24cb743] Running
helpers_test.go:344: "busybox-mount" [85cfd60b-a560-4267-b894-fc75e24cb743] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [85cfd60b-a560-4267-b894-fc75e24cb743] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.017783762s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-420572 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-420572 /tmp/TestFunctionalparallelMountCmdany-port656053152/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.92s)
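Note the retry above: the first findmnt probe exits with status 1 because the 9p mount is still being established, and the test simply probes again. A minimal polling sketch of that pattern in Go, assuming a minikube binary on PATH and a hypothetical profile "demo":

// Keep probing over ssh until findmnt sees the 9p mount or a deadline passes.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(30 * time.Second)
	for {
		err := exec.Command("minikube", "-p", "demo", "ssh",
			"findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			log.Println("/mount-9p is mounted")
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("mount never appeared: %v", err)
		}
		time.Sleep(time.Second)
	}
}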
TestFunctional/parallel/ServiceCmd/List (0.35s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1457: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)
TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1487: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 service list -o json
functional_test.go:1492: Took "446.26006ms" to run "out/minikube-linux-amd64 -p functional-420572 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)
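The JSONOutput check only times the command, but the same output is machine-readable. A sketch of consuming service list -o json; the Namespace/Name/URLs field names are an assumption about the schema, so verify them against the JSON your minikube version actually emits:

// Decode the service list into structs and print each service's URLs.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type svc struct {
	Namespace string   // assumed field name
	Name      string   // assumed field name
	URLs      []string // assumed field name
}

func main() {
	out, err := exec.Command("minikube", "-p", "demo", "service", "list", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var services []svc
	if err := json.Unmarshal(out, &services); err != nil {
		log.Fatal(err)
	}
	for _, s := range services {
		fmt.Printf("%s/%s -> %v\n", s.Namespace, s.Name, s.URLs)
	}
}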
TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1507: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 service --namespace=default --https --url hello-node
functional_test.go:1520: found endpoint: https://192.168.50.139:32168
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.33s)
TestFunctional/parallel/ServiceCmd/Format (0.34s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1538: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)
TestFunctional/parallel/ServiceCmd/URL (0.31s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 service hello-node --url
functional_test.go:1563: found endpoint for hello-node: http://192.168.50.139:32168
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.31s)
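Once service --url prints an endpoint like the one above, a client can validate it directly. A small sketch using Go's net/url and net/http; the endpoint value is illustrative, not taken from a live cluster:

// Parse the printed endpoint, then issue a plain GET against it.
package main

import (
	"fmt"
	"log"
	"net/http"
	"net/url"
)

func main() {
	endpoint := "http://192.168.50.139:32168" // illustrative value
	u, err := url.Parse(endpoint)
	if err != nil {
		log.Fatal(err)
	}
	resp, err := http.Get(u.String())
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}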
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.73s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:378: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 image save gcr.io/google-containers/addon-resizer:functional-420572 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:378: (dbg) Done: out/minikube-linux-amd64 -p functional-420572 image save gcr.io/google-containers/addon-resizer:functional-420572 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.730905896s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.73s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 image rm gcr.io/google-containers/addon-resizer:functional-420572 --alsologtostderr
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.04s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-420572 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.796214963s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.04s)
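ImageSaveToFile, ImageRemove and ImageLoadFromFile together exercise a save/remove/load round trip. A condensed sketch of that sequence, assuming a hypothetical profile "demo" and a scratch tarball path:

// Export an image from the cluster runtime, remove it, then re-import it.
package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command("minikube", append([]string{"-p", "demo"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
}

func main() {
	img := "gcr.io/google-containers/addon-resizer:demo" // hypothetical tag
	tar := "/tmp/addon-resizer-save.tar"                 // scratch path
	run("image", "save", img, tar) // export from the cluster runtime
	run("image", "rm", img)        // drop it from the runtime
	run("image", "load", tar)      // re-import from the tarball
}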
TestFunctional/parallel/MountCmd/specific-port (1.91s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-420572 /tmp/TestFunctionalparallelMountCmdspecific-port3343306408/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-420572 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (260.343058ms)
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-420572 /tmp/TestFunctionalparallelMountCmdspecific-port3343306408/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-420572 ssh "sudo umount -f /mount-9p": exit status 1 (193.540163ms)
-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-420572 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-420572 /tmp/TestFunctionalparallelMountCmdspecific-port3343306408/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.91s)
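The non-zero umount above is expected: the mount was already gone, so umount -f returns status 32 ("not mounted") and the cleanup tolerates it. A sketch of that tolerant teardown, assuming profile "demo":

// Force-unmount, but treat "not mounted" as success rather than a failure.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "demo", "ssh",
		"sudo umount -f /mount-9p").CombinedOutput()
	if err != nil && strings.Contains(string(out), "not mounted") {
		log.Println("already unmounted; nothing to do")
		return
	}
	if err != nil {
		log.Fatalf("umount failed: %v\n%s", err, out)
	}
	log.Println("unmounted")
}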
TestFunctional/parallel/MountCmd/VerifyCleanup (1.48s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-420572 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3419686780/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-420572 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3419686780/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-420572 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3419686780/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-420572 ssh "findmnt -T" /mount1: exit status 1 (294.052945ms)
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-420572 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-420572 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3419686780/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-420572 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3419686780/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-420572 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3419686780/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.48s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.61s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:417: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-420572
functional_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 image save --daemon gcr.io/google-containers/addon-resizer:functional-420572 --alsologtostderr
functional_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p functional-420572 image save --daemon gcr.io/google-containers/addon-resizer:functional-420572 --alsologtostderr: (1.564863926s)
functional_test.go:427: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-420572
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.61s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-420572 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)
TestFunctional/delete_addon-resizer_images (0.07s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-420572
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)
TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:196: (dbg) Run:  docker rmi -f localhost/my-image:functional-420572
--- PASS: TestFunctional/delete_my-image_image (0.01s)
TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:204: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-420572
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)
TestIngressAddonLegacy/StartLegacyK8sCluster (133.73s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-889011 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0524 18:46:46.384827   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-889011 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (2m13.72671178s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (133.73s)
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.75s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-889011 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-889011 addons enable ingress --alsologtostderr -v=5: (10.746101658s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.75s)
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.38s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-889011 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.38s)
TestIngressAddonLegacy/serial/ValidateIngressAddons (38.52s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-889011 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-889011 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (10.086501458s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-889011 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-889011 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [aa76c8d7-3ce0-4ffb-9890-0bc5e021039d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0524 18:49:02.539065   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
helpers_test.go:344: "nginx" [aa76c8d7-3ce0-4ffb-9890-0bc5e021039d] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.008412745s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-889011 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-889011 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-889011 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.100
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-889011 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-889011 addons disable ingress-dns --alsologtostderr -v=1: (10.942180884s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-889011 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-889011 addons disable ingress --alsologtostderr -v=1: (7.339954277s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (38.52s)
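The curl step above routes on the Host header rather than DNS: the request goes to the node IP, and the ingress rule matches the header. The same check can be done natively; the node IP below is illustrative:

// Send the request to the node address but set the Host the ingress routes on.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://192.168.39.100/", nil) // illustrative node IP
	if err != nil {
		log.Fatal(err)
	}
	req.Host = "nginx.example.com" // the ingress rule matches on this header
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %s, %d bytes\n", resp.Status, len(body))
}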
TestJSONOutput/start/Command (76.4s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-775945 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0524 18:49:30.225467   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
E0524 18:50:33.043974   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
E0524 18:50:33.049267   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
E0524 18:50:33.059532   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
E0524 18:50:33.079763   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
E0524 18:50:33.120170   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
E0524 18:50:33.200493   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
E0524 18:50:33.360973   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
E0524 18:50:33.681598   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
E0524 18:50:34.322552   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
E0524 18:50:35.603068   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
E0524 18:50:38.163848   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
E0524 18:50:43.284981   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-775945 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m16.400257411s)
--- PASS: TestJSONOutput/start/Command (76.40s)
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/pause/Command (0.58s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-775945 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.58s)
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/unpause/Command (0.57s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-775945 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/stop/Command (7.09s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-775945 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-775945 --output=json --user=testUser: (7.085206249s)
--- PASS: TestJSONOutput/stop/Command (7.09s)
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
TestErrorJSONOutput (0.18s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-122408 --memory=2200 --output=json --wait=true --driver=fail
E0524 18:50:53.525476   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-122408 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.997355ms)
-- stdout --
	{"specversion":"1.0","id":"7ba1b4d1-7a55-4436-a160-a3aa3f71f216","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-122408] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fb661cbc-9e31-4d7b-b611-58c28c6c8b9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16573"}}
	{"specversion":"1.0","id":"ff5122dd-fd86-4b57-96ce-b7704bb74c94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dfbb7a4d-a8b2-4e0e-88f9-67f2fd591578","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16573-71939/kubeconfig"}}
	{"specversion":"1.0","id":"cb5fb220-92ae-493e-8167-9f74d689aa1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16573-71939/.minikube"}}
	{"specversion":"1.0","id":"aa5b8af4-3518-4a4c-98a6-faf3fc0eaddc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"13bb2ccd-10cc-40c3-b61e-6358dea68089","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6c43789a-73ad-4e6f-9110-4aac5bf6b539","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-122408" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-122408
--- PASS: TestErrorJSONOutput (0.18s)
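Each stdout line above is a CloudEvents-style JSON object, which is what makes the failure machine-checkable: the last event has type io.k8s.sigs.minikube.error with exitcode 56 and name DRV_UNSUPPORTED_OS. A sketch that decodes such lines, modeling only fields visible in that output:

// Scan line-delimited JSON events and report any minikube error events.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"strings"
)

type event struct {
	Type string `json:"type"`
	Data struct {
		Message  string `json:"message"`
		Name     string `json:"name"`
		ExitCode string `json:"exitcode"`
	} `json:"data"`
}

func main() {
	// One event line copied from the output above, as sample input.
	raw := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	sc := bufio.NewScanner(strings.NewReader(raw))
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			log.Fatal(err)
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n", e.Data.Name, e.Data.ExitCode, e.Data.Message)
		}
	}
}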
TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)
TestMinikubeProfile (109.42s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-117768 --driver=kvm2  --container-runtime=containerd
E0524 18:51:14.006401   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-117768 --driver=kvm2  --container-runtime=containerd: (54.123787513s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-119884 --driver=kvm2  --container-runtime=containerd
E0524 18:51:54.968133   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-119884 --driver=kvm2  --container-runtime=containerd: (52.597256632s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-117768
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-119884
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-119884" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-119884
helpers_test.go:175: Cleaning up "first-117768" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-117768
--- PASS: TestMinikubeProfile (109.42s)
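profile list -ojson above returns the profiles in machine-readable form. A sketch of reading it; the top-level valid/invalid arrays and the Name field are assumptions about the schema, so confirm them against your minikube version:

// Decode the profile list and print the names of the valid profiles.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-ojson").Output()
	if err != nil {
		log.Fatal(err)
	}
	var profiles struct {
		Valid []struct{ Name string } // assumed shape
	}
	if err := json.Unmarshal(out, &profiles); err != nil {
		log.Fatal(err)
	}
	for _, p := range profiles.Valid {
		fmt.Println("profile:", p.Name)
	}
}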
TestMountStart/serial/StartWithMountFirst (27.08s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-479202 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-479202 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (26.083474059s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.08s)
TestMountStart/serial/VerifyMountFirst (0.37s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-479202 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-479202 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)
TestMountStart/serial/StartWithMountSecond (30.3s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-503098 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0524 18:53:16.891768   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-503098 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (29.304511135s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.30s)
TestMountStart/serial/VerifyMountSecond (0.36s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-503098 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-503098 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)
TestMountStart/serial/DeleteFirst (1.24s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-479202 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-479202 --alsologtostderr -v=5: (1.242133265s)
--- PASS: TestMountStart/serial/DeleteFirst (1.24s)
TestMountStart/serial/VerifyMountPostDelete (0.42s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-503098 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-503098 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.42s)
TestMountStart/serial/Stop (1.19s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-503098
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-503098: (1.188044058s)
--- PASS: TestMountStart/serial/Stop (1.19s)
TestMountStart/serial/RestartStopped (22.98s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-503098
E0524 18:53:48.772624   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 18:53:48.777931   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 18:53:48.788201   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 18:53:48.808440   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 18:53:48.848679   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 18:53:48.928967   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 18:53:49.089334   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 18:53:49.409900   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 18:53:50.050836   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 18:53:51.331318   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 18:53:53.891716   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 18:53:59.012688   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 18:54:02.538152   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-503098: (21.98009402s)
--- PASS: TestMountStart/serial/RestartStopped (22.98s)
TestMountStart/serial/VerifyMountPostStop (0.37s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-503098 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-503098 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)
TestMultiNode/serial/FreshStart2Nodes (120.5s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-053110 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0524 18:54:09.253738   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 18:54:29.734709   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 18:55:10.695726   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 18:55:33.044363   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
E0524 18:56:00.732774   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-053110 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m0.10315071s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (120.50s)
TestMultiNode/serial/DeployApp2Nodes (3.94s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053110 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053110 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-053110 -- rollout status deployment/busybox: (2.154758385s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053110 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053110 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053110 -- exec busybox-67b7f59bb-2nvsj -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053110 -- exec busybox-67b7f59bb-h5lm7 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053110 -- exec busybox-67b7f59bb-2nvsj -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053110 -- exec busybox-67b7f59bb-h5lm7 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053110 -- exec busybox-67b7f59bb-2nvsj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053110 -- exec busybox-67b7f59bb-h5lm7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.94s)
TestMultiNode/serial/PingHostFrom2Pods (0.8s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053110 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053110 -- exec busybox-67b7f59bb-2nvsj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053110 -- exec busybox-67b7f59bb-2nvsj -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053110 -- exec busybox-67b7f59bb-h5lm7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-053110 -- exec busybox-67b7f59bb-h5lm7 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)
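The pipeline above extracts the host gateway address from busybox's nslookup output (field 3 of line 5) and then pings it from inside the pod. The same check scripted in Go, with a hypothetical kubectl context and pod name:

// Resolve host.minikube.internal inside a pod, then ping the resulting IP.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func podSh(script string) (string, error) {
	out, err := exec.Command("kubectl", "--context", "demo", "exec", "busybox", "--",
		"sh", "-c", script).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// Same pipeline as the test: busybox nslookup puts the address on line 5.
	ip, err := podSh("nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	if err != nil || ip == "" {
		log.Fatalf("resolve failed: %v", err)
	}
	if _, err := podSh("ping -c 1 " + ip); err != nil {
		log.Fatalf("host %s unreachable from pod: %v", ip, err)
	}
	log.Printf("host %s reachable from pod", ip)
}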
TestMultiNode/serial/AddNode (52.38s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-053110 -v 3 --alsologtostderr
E0524 18:56:32.616890   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-053110 -v 3 --alsologtostderr: (51.824678443s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (52.38s)
TestMultiNode/serial/ProfileList (0.22s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)
TestMultiNode/serial/CopyFile (7.11s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 cp testdata/cp-test.txt multinode-053110:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 ssh -n multinode-053110 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 cp multinode-053110:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2807694279/001/cp-test_multinode-053110.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 ssh -n multinode-053110 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 cp multinode-053110:/home/docker/cp-test.txt multinode-053110-m02:/home/docker/cp-test_multinode-053110_multinode-053110-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 ssh -n multinode-053110 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 ssh -n multinode-053110-m02 "sudo cat /home/docker/cp-test_multinode-053110_multinode-053110-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 cp multinode-053110:/home/docker/cp-test.txt multinode-053110-m03:/home/docker/cp-test_multinode-053110_multinode-053110-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 ssh -n multinode-053110 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 ssh -n multinode-053110-m03 "sudo cat /home/docker/cp-test_multinode-053110_multinode-053110-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 cp testdata/cp-test.txt multinode-053110-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 ssh -n multinode-053110-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 cp multinode-053110-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2807694279/001/cp-test_multinode-053110-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 ssh -n multinode-053110-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 cp multinode-053110-m02:/home/docker/cp-test.txt multinode-053110:/home/docker/cp-test_multinode-053110-m02_multinode-053110.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 ssh -n multinode-053110-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 ssh -n multinode-053110 "sudo cat /home/docker/cp-test_multinode-053110-m02_multinode-053110.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 cp multinode-053110-m02:/home/docker/cp-test.txt multinode-053110-m03:/home/docker/cp-test_multinode-053110-m02_multinode-053110-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 ssh -n multinode-053110-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 ssh -n multinode-053110-m03 "sudo cat /home/docker/cp-test_multinode-053110-m02_multinode-053110-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 cp testdata/cp-test.txt multinode-053110-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 ssh -n multinode-053110-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 cp multinode-053110-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2807694279/001/cp-test_multinode-053110-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 ssh -n multinode-053110-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 cp multinode-053110-m03:/home/docker/cp-test.txt multinode-053110:/home/docker/cp-test_multinode-053110-m03_multinode-053110.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 ssh -n multinode-053110-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 ssh -n multinode-053110 "sudo cat /home/docker/cp-test_multinode-053110-m03_multinode-053110.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 cp multinode-053110-m03:/home/docker/cp-test.txt multinode-053110-m02:/home/docker/cp-test_multinode-053110-m03_multinode-053110-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 ssh -n multinode-053110-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 ssh -n multinode-053110-m02 "sudo cat /home/docker/cp-test_multinode-053110-m03_multinode-053110-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.11s)
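The copy matrix above drives `minikube cp` in every direction it supports and verifies each transfer by cat-ing the file over `ssh -n`. The three forms exercised, reduced to a sketch with placeholder names:
	minikube -p <profile> cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt                   # host -> node
	minikube -p <profile> cp <node>:/home/docker/cp-test.txt /tmp/cp-test-copy.txt                  # node -> host
	minikube -p <profile> cp <node-a>:/home/docker/cp-test.txt <node-b>:/home/docker/cp-test.txt    # node -> node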

                                                
                                    
TestMultiNode/serial/StopNode (2.05s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-053110 node stop m03: (1.229295674s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-053110 status: exit status 7 (410.274601ms)
-- stdout --
	multinode-053110
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-053110-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-053110-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-053110 status --alsologtostderr: exit status 7 (405.369885ms)
-- stdout --
	multinode-053110
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-053110-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-053110-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0524 18:57:15.026615   92967 out.go:296] Setting OutFile to fd 1 ...
	I0524 18:57:15.026736   92967 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 18:57:15.026746   92967 out.go:309] Setting ErrFile to fd 2...
	I0524 18:57:15.026753   92967 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 18:57:15.026865   92967 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16573-71939/.minikube/bin
	I0524 18:57:15.027048   92967 out.go:303] Setting JSON to false
	I0524 18:57:15.027072   92967 mustload.go:65] Loading cluster: multinode-053110
	I0524 18:57:15.027181   92967 notify.go:220] Checking for updates...
	I0524 18:57:15.027559   92967 config.go:182] Loaded profile config "multinode-053110": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0524 18:57:15.027577   92967 status.go:255] checking status of multinode-053110 ...
	I0524 18:57:15.028074   92967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0524 18:57:15.028121   92967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0524 18:57:15.042723   92967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43385
	I0524 18:57:15.043126   92967 main.go:141] libmachine: () Calling .GetVersion
	I0524 18:57:15.043722   92967 main.go:141] libmachine: Using API Version  1
	I0524 18:57:15.043748   92967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0524 18:57:15.044095   92967 main.go:141] libmachine: () Calling .GetMachineName
	I0524 18:57:15.044277   92967 main.go:141] libmachine: (multinode-053110) Calling .GetState
	I0524 18:57:15.045744   92967 status.go:330] multinode-053110 host status = "Running" (err=<nil>)
	I0524 18:57:15.045759   92967 host.go:66] Checking if "multinode-053110" exists ...
	I0524 18:57:15.046022   92967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0524 18:57:15.046057   92967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0524 18:57:15.060171   92967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45687
	I0524 18:57:15.060489   92967 main.go:141] libmachine: () Calling .GetVersion
	I0524 18:57:15.060861   92967 main.go:141] libmachine: Using API Version  1
	I0524 18:57:15.060883   92967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0524 18:57:15.061174   92967 main.go:141] libmachine: () Calling .GetMachineName
	I0524 18:57:15.061326   92967 main.go:141] libmachine: (multinode-053110) Calling .GetIP
	I0524 18:57:15.063540   92967 main.go:141] libmachine: (multinode-053110) DBG | domain multinode-053110 has defined MAC address 52:54:00:08:98:07 in network mk-multinode-053110
	I0524 18:57:15.063893   92967 main.go:141] libmachine: (multinode-053110) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:98:07", ip: ""} in network mk-multinode-053110: {Iface:virbr1 ExpiryTime:2023-05-24 19:54:23 +0000 UTC Type:0 Mac:52:54:00:08:98:07 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:multinode-053110 Clientid:01:52:54:00:08:98:07}
	I0524 18:57:15.063928   92967 main.go:141] libmachine: (multinode-053110) DBG | domain multinode-053110 has defined IP address 192.168.39.189 and MAC address 52:54:00:08:98:07 in network mk-multinode-053110
	I0524 18:57:15.063992   92967 host.go:66] Checking if "multinode-053110" exists ...
	I0524 18:57:15.064294   92967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0524 18:57:15.064333   92967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0524 18:57:15.077860   92967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41341
	I0524 18:57:15.078184   92967 main.go:141] libmachine: () Calling .GetVersion
	I0524 18:57:15.078542   92967 main.go:141] libmachine: Using API Version  1
	I0524 18:57:15.078565   92967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0524 18:57:15.078850   92967 main.go:141] libmachine: () Calling .GetMachineName
	I0524 18:57:15.079036   92967 main.go:141] libmachine: (multinode-053110) Calling .DriverName
	I0524 18:57:15.079201   92967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0524 18:57:15.079219   92967 main.go:141] libmachine: (multinode-053110) Calling .GetSSHHostname
	I0524 18:57:15.081360   92967 main.go:141] libmachine: (multinode-053110) DBG | domain multinode-053110 has defined MAC address 52:54:00:08:98:07 in network mk-multinode-053110
	I0524 18:57:15.081683   92967 main.go:141] libmachine: (multinode-053110) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:98:07", ip: ""} in network mk-multinode-053110: {Iface:virbr1 ExpiryTime:2023-05-24 19:54:23 +0000 UTC Type:0 Mac:52:54:00:08:98:07 Iaid: IPaddr:192.168.39.189 Prefix:24 Hostname:multinode-053110 Clientid:01:52:54:00:08:98:07}
	I0524 18:57:15.081712   92967 main.go:141] libmachine: (multinode-053110) DBG | domain multinode-053110 has defined IP address 192.168.39.189 and MAC address 52:54:00:08:98:07 in network mk-multinode-053110
	I0524 18:57:15.081854   92967 main.go:141] libmachine: (multinode-053110) Calling .GetSSHPort
	I0524 18:57:15.082018   92967 main.go:141] libmachine: (multinode-053110) Calling .GetSSHKeyPath
	I0524 18:57:15.082165   92967 main.go:141] libmachine: (multinode-053110) Calling .GetSSHUsername
	I0524 18:57:15.082291   92967 sshutil.go:53] new ssh client: &{IP:192.168.39.189 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/machines/multinode-053110/id_rsa Username:docker}
	I0524 18:57:15.171936   92967 ssh_runner.go:195] Run: systemctl --version
	I0524 18:57:15.177218   92967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 18:57:15.190598   92967 kubeconfig.go:92] found "multinode-053110" server: "https://192.168.39.189:8443"
	I0524 18:57:15.190619   92967 api_server.go:166] Checking apiserver status ...
	I0524 18:57:15.190647   92967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 18:57:15.202083   92967 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1127/cgroup
	I0524 18:57:15.210476   92967 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod44e11f3857bf93351607f8ecbe095d41/1fe2cb24b246b221c1762b279fba052517730e11703757d5e745e4a3ac1d54db"
	I0524 18:57:15.210533   92967 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod44e11f3857bf93351607f8ecbe095d41/1fe2cb24b246b221c1762b279fba052517730e11703757d5e745e4a3ac1d54db/freezer.state
	I0524 18:57:15.219398   92967 api_server.go:204] freezer state: "THAWED"
	I0524 18:57:15.219416   92967 api_server.go:253] Checking apiserver healthz at https://192.168.39.189:8443/healthz ...
	I0524 18:57:15.224397   92967 api_server.go:279] https://192.168.39.189:8443/healthz returned 200:
	ok
	I0524 18:57:15.224420   92967 status.go:421] multinode-053110 apiserver status = Running (err=<nil>)
	I0524 18:57:15.224432   92967 status.go:257] multinode-053110 status: &{Name:multinode-053110 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0524 18:57:15.224453   92967 status.go:255] checking status of multinode-053110-m02 ...
	I0524 18:57:15.224718   92967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0524 18:57:15.224741   92967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0524 18:57:15.238770   92967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37185
	I0524 18:57:15.239104   92967 main.go:141] libmachine: () Calling .GetVersion
	I0524 18:57:15.239571   92967 main.go:141] libmachine: Using API Version  1
	I0524 18:57:15.239590   92967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0524 18:57:15.239879   92967 main.go:141] libmachine: () Calling .GetMachineName
	I0524 18:57:15.240060   92967 main.go:141] libmachine: (multinode-053110-m02) Calling .GetState
	I0524 18:57:15.241554   92967 status.go:330] multinode-053110-m02 host status = "Running" (err=<nil>)
	I0524 18:57:15.241574   92967 host.go:66] Checking if "multinode-053110-m02" exists ...
	I0524 18:57:15.241823   92967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0524 18:57:15.241856   92967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0524 18:57:15.255193   92967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37479
	I0524 18:57:15.255538   92967 main.go:141] libmachine: () Calling .GetVersion
	I0524 18:57:15.255900   92967 main.go:141] libmachine: Using API Version  1
	I0524 18:57:15.255924   92967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0524 18:57:15.256202   92967 main.go:141] libmachine: () Calling .GetMachineName
	I0524 18:57:15.256351   92967 main.go:141] libmachine: (multinode-053110-m02) Calling .GetIP
	I0524 18:57:15.258724   92967 main.go:141] libmachine: (multinode-053110-m02) DBG | domain multinode-053110-m02 has defined MAC address 52:54:00:a3:19:0b in network mk-multinode-053110
	I0524 18:57:15.259064   92967 main.go:141] libmachine: (multinode-053110-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:19:0b", ip: ""} in network mk-multinode-053110: {Iface:virbr1 ExpiryTime:2023-05-24 19:55:36 +0000 UTC Type:0 Mac:52:54:00:a3:19:0b Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:multinode-053110-m02 Clientid:01:52:54:00:a3:19:0b}
	I0524 18:57:15.259094   92967 main.go:141] libmachine: (multinode-053110-m02) DBG | domain multinode-053110-m02 has defined IP address 192.168.39.244 and MAC address 52:54:00:a3:19:0b in network mk-multinode-053110
	I0524 18:57:15.259194   92967 host.go:66] Checking if "multinode-053110-m02" exists ...
	I0524 18:57:15.259501   92967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0524 18:57:15.259535   92967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0524 18:57:15.272371   92967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33907
	I0524 18:57:15.272662   92967 main.go:141] libmachine: () Calling .GetVersion
	I0524 18:57:15.273053   92967 main.go:141] libmachine: Using API Version  1
	I0524 18:57:15.273085   92967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0524 18:57:15.273384   92967 main.go:141] libmachine: () Calling .GetMachineName
	I0524 18:57:15.273582   92967 main.go:141] libmachine: (multinode-053110-m02) Calling .DriverName
	I0524 18:57:15.273771   92967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0524 18:57:15.273789   92967 main.go:141] libmachine: (multinode-053110-m02) Calling .GetSSHHostname
	I0524 18:57:15.275928   92967 main.go:141] libmachine: (multinode-053110-m02) DBG | domain multinode-053110-m02 has defined MAC address 52:54:00:a3:19:0b in network mk-multinode-053110
	I0524 18:57:15.276317   92967 main.go:141] libmachine: (multinode-053110-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a3:19:0b", ip: ""} in network mk-multinode-053110: {Iface:virbr1 ExpiryTime:2023-05-24 19:55:36 +0000 UTC Type:0 Mac:52:54:00:a3:19:0b Iaid: IPaddr:192.168.39.244 Prefix:24 Hostname:multinode-053110-m02 Clientid:01:52:54:00:a3:19:0b}
	I0524 18:57:15.276345   92967 main.go:141] libmachine: (multinode-053110-m02) DBG | domain multinode-053110-m02 has defined IP address 192.168.39.244 and MAC address 52:54:00:a3:19:0b in network mk-multinode-053110
	I0524 18:57:15.276459   92967 main.go:141] libmachine: (multinode-053110-m02) Calling .GetSSHPort
	I0524 18:57:15.276634   92967 main.go:141] libmachine: (multinode-053110-m02) Calling .GetSSHKeyPath
	I0524 18:57:15.276788   92967 main.go:141] libmachine: (multinode-053110-m02) Calling .GetSSHUsername
	I0524 18:57:15.276897   92967 sshutil.go:53] new ssh client: &{IP:192.168.39.244 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16573-71939/.minikube/machines/multinode-053110-m02/id_rsa Username:docker}
	I0524 18:57:15.363491   92967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 18:57:15.376594   92967 status.go:257] multinode-053110-m02 status: &{Name:multinode-053110-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0524 18:57:15.376623   92967 status.go:255] checking status of multinode-053110-m03 ...
	I0524 18:57:15.376949   92967 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0524 18:57:15.376980   92967 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0524 18:57:15.391115   92967 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43315
	I0524 18:57:15.391459   92967 main.go:141] libmachine: () Calling .GetVersion
	I0524 18:57:15.391887   92967 main.go:141] libmachine: Using API Version  1
	I0524 18:57:15.391902   92967 main.go:141] libmachine: () Calling .SetConfigRaw
	I0524 18:57:15.392290   92967 main.go:141] libmachine: () Calling .GetMachineName
	I0524 18:57:15.392473   92967 main.go:141] libmachine: (multinode-053110-m03) Calling .GetState
	I0524 18:57:15.393942   92967 status.go:330] multinode-053110-m03 host status = "Stopped" (err=<nil>)
	I0524 18:57:15.393954   92967 status.go:343] host is not running, skipping remaining checks
	I0524 18:57:15.393959   92967 status.go:257] multinode-053110-m03 status: &{Name:multinode-053110-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.05s)
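Both status calls above exit with code 7 rather than 0; that is the expected signal that at least one host in the profile is stopped, and the test treats it as success. A minimal sketch of checking for that state in a script, assuming only the exit-code convention visible here:
	minikube -p multinode-053110 status
	if [ $? -eq 7 ]; then
		echo "one or more nodes are stopped"   # matches the m03 "host: Stopped" output above
	fi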

                                                
                                    
TestMultiNode/serial/StartAfterStop (71.09s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-053110 node start m03 --alsologtostderr: (1m10.471810961s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (71.09s)
TestMultiNode/serial/RestartKeepsNodes (513.89s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-053110
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-053110
E0524 18:58:48.772598   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 18:59:02.538993   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
E0524 18:59:16.459476   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 19:00:25.588145   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
E0524 19:00:33.044230   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-053110: (3m4.595790101s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-053110 --wait=true -v=8 --alsologtostderr
E0524 19:03:48.772919   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 19:04:02.539147   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
E0524 19:05:33.044261   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
E0524 19:06:56.093992   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-053110 --wait=true -v=8 --alsologtostderr: (5m29.221607991s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-053110
--- PASS: TestMultiNode/serial/RestartKeepsNodes (513.89s)
TestMultiNode/serial/DeleteNode (1.94s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-053110 node delete m03: (1.434985117s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.94s)
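The go-template in the final check walks every node's status.conditions and prints only the Ready condition's status, one value per node, so the assertion is simply that no remaining node reports anything but True after the delete. A roughly equivalent probe via jsonpath, offered as an illustrative alternative rather than what the test runs:
	kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'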

                                                
                                    
TestMultiNode/serial/StopMultiNode (184.07s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 stop
E0524 19:08:48.772991   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 19:09:02.541524   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-053110 stop: (3m3.915261287s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-053110 status: exit status 7 (80.400162ms)
-- stdout --
	multinode-053110
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-053110-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-053110 status --alsologtostderr: exit status 7 (72.775898ms)
-- stdout --
	multinode-053110
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-053110-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0524 19:10:06.355660   95864 out.go:296] Setting OutFile to fd 1 ...
	I0524 19:10:06.355821   95864 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 19:10:06.355830   95864 out.go:309] Setting ErrFile to fd 2...
	I0524 19:10:06.355837   95864 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 19:10:06.355939   95864 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16573-71939/.minikube/bin
	I0524 19:10:06.356104   95864 out.go:303] Setting JSON to false
	I0524 19:10:06.356132   95864 mustload.go:65] Loading cluster: multinode-053110
	I0524 19:10:06.356223   95864 notify.go:220] Checking for updates...
	I0524 19:10:06.356516   95864 config.go:182] Loaded profile config "multinode-053110": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0524 19:10:06.356537   95864 status.go:255] checking status of multinode-053110 ...
	I0524 19:10:06.356863   95864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0524 19:10:06.356922   95864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0524 19:10:06.370378   95864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39125
	I0524 19:10:06.370706   95864 main.go:141] libmachine: () Calling .GetVersion
	I0524 19:10:06.371205   95864 main.go:141] libmachine: Using API Version  1
	I0524 19:10:06.371225   95864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0524 19:10:06.371546   95864 main.go:141] libmachine: () Calling .GetMachineName
	I0524 19:10:06.371702   95864 main.go:141] libmachine: (multinode-053110) Calling .GetState
	I0524 19:10:06.373004   95864 status.go:330] multinode-053110 host status = "Stopped" (err=<nil>)
	I0524 19:10:06.373017   95864 status.go:343] host is not running, skipping remaining checks
	I0524 19:10:06.373021   95864 status.go:257] multinode-053110 status: &{Name:multinode-053110 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0524 19:10:06.373061   95864 status.go:255] checking status of multinode-053110-m02 ...
	I0524 19:10:06.373310   95864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0524 19:10:06.373344   95864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0524 19:10:06.386717   95864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41977
	I0524 19:10:06.388401   95864 main.go:141] libmachine: () Calling .GetVersion
	I0524 19:10:06.388870   95864 main.go:141] libmachine: Using API Version  1
	I0524 19:10:06.388892   95864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0524 19:10:06.389197   95864 main.go:141] libmachine: () Calling .GetMachineName
	I0524 19:10:06.389380   95864 main.go:141] libmachine: (multinode-053110-m02) Calling .GetState
	I0524 19:10:06.390752   95864 status.go:330] multinode-053110-m02 host status = "Stopped" (err=<nil>)
	I0524 19:10:06.390765   95864 status.go:343] host is not running, skipping remaining checks
	I0524 19:10:06.390770   95864 status.go:257] multinode-053110-m02 status: &{Name:multinode-053110-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (184.07s)
TestMultiNode/serial/RestartMultiNode (235.26s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-053110 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0524 19:10:11.820019   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 19:10:33.044395   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
E0524 19:13:48.773310   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-053110 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m54.740584397s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-053110 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (235.26s)
TestMultiNode/serial/ValidateNameConflict (54.91s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-053110
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-053110-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-053110-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (58.147665ms)
-- stdout --
	* [multinode-053110-m02] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16573
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16573-71939/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16573-71939/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-053110-m02' is duplicated with machine name 'multinode-053110-m02' in profile 'multinode-053110'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-053110-m03 --driver=kvm2  --container-runtime=containerd
E0524 19:14:02.538609   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-053110-m03 --driver=kvm2  --container-runtime=containerd: (53.620809645s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-053110
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-053110: exit status 80 (206.195437ms)
-- stdout --
	* Adding node m03 to cluster multinode-053110
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-053110-m03 already exists in multinode-053110-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-053110-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (54.91s)
TestScheduledStopUnix (125.79s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-611495 --memory=2048 --driver=kvm2  --container-runtime=containerd
E0524 19:20:33.044405   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-611495 --memory=2048 --driver=kvm2  --container-runtime=containerd: (54.257904762s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-611495 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-611495 -n scheduled-stop-611495
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-611495 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-611495 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-611495 -n scheduled-stop-611495
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-611495
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-611495 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-611495
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-611495: exit status 7 (55.825795ms)
-- stdout --
	scheduled-stop-611495
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-611495 -n scheduled-stop-611495
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-611495 -n scheduled-stop-611495: exit status 7 (63.688359ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-611495" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-611495
--- PASS: TestScheduledStopUnix (125.79s)
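The sequence above arms a stop five minutes out, cancels it, then re-arms a 15-second stop and lets it fire; the exit-7 status checks at the end prove the VM really went down. The lifecycle, reduced to a sketch using only the flags this test exercises:
	minikube stop -p <profile> --schedule 5m       # arm a future stop
	minikube stop -p <profile> --cancel-scheduled  # disarm it
	minikube stop -p <profile> --schedule 15s      # re-arm; the stop fires ~15s later
	minikube status -p <profile>                   # exits 7 once host/kubelet are Stopped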

                                                
                                    
TestKubernetesUpgrade (238.68s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-043575 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-043575 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m45.651298886s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-043575
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-043575: (7.529548693s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-043575 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-043575 status --format={{.Host}}: exit status 7 (76.76437ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-043575 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-043575 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m35.107375141s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-043575 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-043575 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-043575 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (76.892733ms)
-- stdout --
	* [kubernetes-upgrade-043575] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16573
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16573-71939/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16573-71939/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-043575
	    minikube start -p kubernetes-upgrade-043575 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0435752 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.2, by running:
	    
	    minikube start -p kubernetes-upgrade-043575 --kubernetes-version=v1.27.2
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-043575 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0524 19:25:33.043726   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-043575 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (28.773278434s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-043575" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-043575
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-043575: (1.406684258s)
--- PASS: TestKubernetesUpgrade (238.68s)
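The flow validated here: bootstrap at v1.16.0, stop, restart at v1.27.2 (an in-place upgrade), then confirm that requesting v1.16.0 again fails fast with exit code 106 (K8S_DOWNGRADE_UNSUPPORTED) instead of touching the cluster. As a sketch mirroring the commands above, with a placeholder profile:
	minikube start -p <profile> --kubernetes-version=v1.16.0
	minikube stop -p <profile>
	minikube start -p <profile> --kubernetes-version=v1.27.2   # upgrade: allowed
	minikube start -p <profile> --kubernetes-version=v1.16.0   # downgrade: refused, exit 106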

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-044049 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-044049 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (78.617882ms)
-- stdout --
	* [NoKubernetes-044049] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16573
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16573-71939/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16573-71939/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
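This failure is pure flag validation: --no-kubernetes contradicts an explicit (or config-persisted) --kubernetes-version, so minikube exits 14 before creating anything. The recovery path, taken directly from the error text above:
	minikube config unset kubernetes-version                                      # drop any persisted version
	minikube start -p <profile> --no-kubernetes --driver=kvm2 --container-runtime=containerd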

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (103.18s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-044049 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-044049 --driver=kvm2  --container-runtime=containerd: (1m42.862269714s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-044049 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (103.18s)
TestNoKubernetes/serial/StartWithStopK8s (16.17s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-044049 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-044049 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (14.891063999s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-044049 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-044049 status -o json: exit status 2 (252.519274ms)
-- stdout --
	{"Name":"NoKubernetes-044049","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-044049
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-044049: (1.027185124s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.17s)
TestNoKubernetes/serial/Start (63.84s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-044049 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0524 19:24:02.538425   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-044049 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (1m3.837435172s)
--- PASS: TestNoKubernetes/serial/Start (63.84s)
TestNetworkPlugins/group/false (3.58s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start -p false-036096 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-036096 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (123.159935ms)
-- stdout --
	* [false-036096] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16573
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16573-71939/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16573-71939/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0524 19:24:54.730653  102903 out.go:296] Setting OutFile to fd 1 ...
	I0524 19:24:54.730853  102903 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 19:24:54.730865  102903 out.go:309] Setting ErrFile to fd 2...
	I0524 19:24:54.730872  102903 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 19:24:54.731045  102903 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16573-71939/.minikube/bin
	I0524 19:24:54.731844  102903 out.go:303] Setting JSON to false
	I0524 19:24:54.733108  102903 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":11230,"bootTime":1684945065,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1034-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0524 19:24:54.733188  102903 start.go:135] virtualization: kvm guest
	I0524 19:24:54.735940  102903 out.go:177] * [false-036096] minikube v1.30.1 on Ubuntu 20.04 (kvm/amd64)
	I0524 19:24:54.737593  102903 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 19:24:54.737561  102903 notify.go:220] Checking for updates...
	I0524 19:24:54.739401  102903 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 19:24:54.741064  102903 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16573-71939/kubeconfig
	I0524 19:24:54.746465  102903 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16573-71939/.minikube
	I0524 19:24:54.747929  102903 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0524 19:24:54.749469  102903 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 19:24:54.751604  102903 config.go:182] Loaded profile config "NoKubernetes-044049": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I0524 19:24:54.751753  102903 config.go:182] Loaded profile config "kubernetes-upgrade-043575": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.27.2
	I0524 19:24:54.751867  102903 config.go:182] Loaded profile config "running-upgrade-134012": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0524 19:24:54.751928  102903 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 19:24:54.795415  102903 out.go:177] * Using the kvm2 driver based on user configuration
	I0524 19:24:54.796988  102903 start.go:295] selected driver: kvm2
	I0524 19:24:54.797003  102903 start.go:870] validating driver "kvm2" against <nil>
	I0524 19:24:54.797017  102903 start.go:881] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 19:24:54.799543  102903 out.go:177] 
	W0524 19:24:54.801145  102903 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0524 19:24:54.802744  102903 out.go:177] 
** /stderr **
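The exit-14 rejection above is by design: containerd provides no fallback pod network, so minikube refuses --cni=false whenever --container-runtime=containerd is set. A start line this validation would accept, with the CNI choice being an illustrative assumption (any plugin minikube supports would do):
	minikube start -p <profile> --container-runtime=containerd --cni=bridge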
net_test.go:86: 
----------------------- debugLogs start: false-036096 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-036096

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-036096

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-036096

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-036096

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-036096

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-036096

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-036096

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-036096

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-036096

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-036096

>>> host: /etc/nsswitch.conf:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: /etc/hosts:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: /etc/resolv.conf:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-036096

>>> host: crictl pods:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: crictl containers:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> k8s: describe netcat deployment:
error: context "false-036096" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-036096" does not exist

>>> k8s: netcat logs:
error: context "false-036096" does not exist

>>> k8s: describe coredns deployment:
error: context "false-036096" does not exist

>>> k8s: describe coredns pods:
error: context "false-036096" does not exist

>>> k8s: coredns logs:
error: context "false-036096" does not exist

>>> k8s: describe api server pod(s):
error: context "false-036096" does not exist

>>> k8s: api server logs:
error: context "false-036096" does not exist

>>> host: /etc/cni:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: ip a s:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: ip r s:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: iptables-save:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: iptables table nat:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> k8s: describe kube-proxy daemon set:
error: context "false-036096" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-036096" does not exist

>>> k8s: kube-proxy logs:
error: context "false-036096" does not exist

>>> host: kubelet daemon status:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: kubelet daemon config:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> k8s: kubelet logs:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16573-71939/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 24 May 2023 19:24:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: cluster_info
    server: https://192.168.50.206:8443
  name: kubernetes-upgrade-043575
contexts:
- context:
    cluster: kubernetes-upgrade-043575
    user: kubernetes-upgrade-043575
  name: kubernetes-upgrade-043575
current-context: kubernetes-upgrade-043575
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-043575
  user:
    client-certificate: /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/kubernetes-upgrade-043575/client.crt
    client-key: /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/kubernetes-upgrade-043575/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-036096

>>> host: docker daemon status:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: docker daemon config:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: /etc/docker/daemon.json:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: docker system info:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: cri-docker daemon status:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: cri-docker daemon config:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: cri-dockerd version:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: containerd daemon status:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: containerd daemon config:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: /etc/containerd/config.toml:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: containerd config dump:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: crio daemon status:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: crio daemon config:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: /etc/crio:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

>>> host: crio config:
* Profile "false-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-036096"

----------------------- debugLogs end: false-036096 [took: 3.318465976s] --------------------------------
helpers_test.go:175: Cleaning up "false-036096" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-036096
--- PASS: TestNetworkPlugins/group/false (3.58s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-044049 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-044049 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.548016ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
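Note: the non-zero exit here is the passing condition. `systemctl is-active` exits 0 only when the unit is active, and the inner status 3 ("inactive") surfaces through `minikube ssh` as exit status 1. A sketch of the assertion, approximating what no_kubernetes_test.go does rather than quoting it:

	// nokubernetes_sketch_test.go — a sketch of the check above.
	package sketch

	import (
		"os/exec"
		"testing"
	)

	func TestKubeletInactive(t *testing.T) {
		cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-044049",
			"sudo systemctl is-active --quiet service kubelet")
		// err == nil would mean a zero exit, i.e. kubelet is running.
		if out, err := cmd.CombinedOutput(); err == nil {
			t.Fatalf("kubelet unexpectedly active: %s", out)
		}
	}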

                                                
                                    
TestNoKubernetes/serial/ProfileList (32.89s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (18.125389728s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.764992806s)
--- PASS: TestNoKubernetes/serial/ProfileList (32.89s)

TestNoKubernetes/serial/Stop (1.23s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-044049
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-044049: (1.230444777s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

TestNoKubernetes/serial/StartNoArgs (24.94s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-044049 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-044049 --driver=kvm2  --container-runtime=containerd: (24.938193374s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (24.94s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-044049 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-044049 "sudo systemctl is-active --quiet service kubelet": exit status 1 (241.610833ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

TestStoppedBinaryUpgrade/Setup (0.43s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.43s)

                                                
                                    
TestPause/serial/Start (73.98s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-859826 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
E0524 19:28:48.773494   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 19:29:02.538193   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-859826 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m13.980382116s)
--- PASS: TestPause/serial/Start (73.98s)

TestPause/serial/SecondStartNoReconfiguration (7.92s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-859826 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-859826 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (7.899466876s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.92s)

TestStartStop/group/old-k8s-version/serial/FirstStart (169.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-848884 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-848884 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m49.378262162s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (169.38s)

TestPause/serial/Pause (0.86s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-859826 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.86s)

                                                
                                    
TestPause/serial/VerifyStatus (0.31s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-859826 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-859826 --output=json --layout=cluster: exit status 2 (314.099361ms)

-- stdout --
	{"Name":"pause-859826","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-859826","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)
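Note: the stdout above is the machine-readable `--layout=cluster` status: the cluster and apiserver report StatusCode 418/"Paused" while the kubelet reports 405/"Stopped", and the command exits 2 because not every component is OK. A sketch of decoding just the fields visible in that output (the struct shape is inferred from the JSON, not taken from minikube's source):

	// status_decode_sketch.go — decodes the fields shown in the stdout above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type component struct {
		Name       string
		StatusCode int
		StatusName string
	}

	type node struct {
		Name       string
		StatusCode int
		StatusName string
		Components map[string]component
	}

	type clusterStatus struct {
		Name       string
		StatusCode int
		StatusName string
		Nodes      []node
	}

	func main() {
		raw := `{"Name":"pause-859826","StatusCode":418,"StatusName":"Paused",
		         "Nodes":[{"Name":"pause-859826","StatusCode":200,"StatusName":"OK",
		         "Components":{"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
		var st clusterStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			panic(err)
		}
		// Prints: pause-859826 Paused; kubelet on pause-859826: Stopped
		fmt.Printf("%s %s; kubelet on %s: %s\n", st.Name, st.StatusName,
			st.Nodes[0].Name, st.Nodes[0].Components["kubelet"].StatusName)
	}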

                                                
                                    
TestPause/serial/Unpause (0.73s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-859826 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.73s)

TestPause/serial/PauseAgain (0.94s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-859826 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.94s)

TestPause/serial/DeletePaused (1.2s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-859826 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-859826 --alsologtostderr -v=5: (1.198590217s)
--- PASS: TestPause/serial/DeletePaused (1.20s)

TestPause/serial/VerifyDeletedResources (30.9s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (30.904492819s)
--- PASS: TestPause/serial/VerifyDeletedResources (30.90s)
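Note: VerifyDeletedResources passes once the deleted pause-859826 profile no longer shows up in `minikube profile list --output json`. A sketch of that check; the valid/invalid top-level keys are an assumption about the JSON shape, since this log does not print it:

	// profile_list_sketch.go — asserts a profile is absent after delete.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64",
			"profile", "list", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var profiles struct {
			Valid   []struct{ Name string } `json:"valid"`
			Invalid []struct{ Name string } `json:"invalid"`
		}
		if err := json.Unmarshal(out, &profiles); err != nil {
			panic(err)
		}
		for _, p := range append(profiles.Valid, profiles.Invalid...) {
			if p.Name == "pause-859826" {
				fmt.Println("profile still present after delete")
				return
			}
		}
		fmt.Println("profile fully removed")
	}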

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (132.61s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-484901 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.2
E0524 19:30:33.044144   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-484901 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.2: (2m12.612964993s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (132.61s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-848884 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2db161bd-9aee-4b78-b085-c1814bd13d30] Pending
helpers_test.go:344: "busybox" [2db161bd-9aee-4b78-b085-c1814bd13d30] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2db161bd-9aee-4b78-b085-c1814bd13d30] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.032400154s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-848884 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.65s)
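Note: the "waiting 8m0s for pods matching" lines come from a helper that polls the labeled pod through Pending to Running and then checks readiness. A rough equivalent built on kubectl's jsonpath output (a sketch, not helpers_test.go's actual implementation):

	// pod_wait_sketch.go — polls for a labeled pod to reach phase Running.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(8 * time.Minute)
		for time.Now().Before(deadline) {
			out, _ := exec.Command("kubectl", "--context", "old-k8s-version-848884",
				"get", "pods", "-l", "integration-test=busybox",
				"-o", "jsonpath={.items[*].status.phase}").Output()
			if strings.TrimSpace(string(out)) == "Running" {
				fmt.Println("pod is Running")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod")
	}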

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.66s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-484901 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ba68108b-19e2-466e-8117-ae6815e4892d] Pending
helpers_test.go:344: "busybox" [ba68108b-19e2-466e-8117-ae6815e4892d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ba68108b-19e2-466e-8117-ae6815e4892d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.055364431s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-484901 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.66s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-848884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-848884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.103647252s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-848884 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/old-k8s-version/serial/Stop (92.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-848884 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-848884 --alsologtostderr -v=3: (1m32.378243744s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.38s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.39s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-484901 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-484901 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.258216361s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-484901 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.39s)

TestStartStop/group/no-preload/serial/Stop (92.6s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-484901 --alsologtostderr -v=3
E0524 19:33:45.591188   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-484901 --alsologtostderr -v=3: (1m32.601859662s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.60s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-848884 -n old-k8s-version-848884
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-848884 -n old-k8s-version-848884: exit status 7 (74.233565ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-848884 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
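Note: exit status 7 from `minikube status` encodes component state rather than a command failure, which is why the test logs "(may be ok)" and carries on; the stdout still says Stopped. A sketch that treats the non-zero exit as data instead of an error:

	// status_exitcode_sketch.go — distinguishes "stopped" from a real failure.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "old-k8s-version-848884")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Non-zero exit is expected while the cluster is stopped; the
			// stdout ("Stopped") still carries the state we care about.
			fmt.Printf("status exited %d: %s\n", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			panic(err) // e.g. binary not found
		}
		fmt.Printf("host state: %s\n", out)
	}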

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (501.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-848884 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-848884 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (8m20.936616446s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-848884 -n old-k8s-version-848884
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (501.26s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-484901 -n no-preload-484901
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-484901 -n no-preload-484901: exit status 7 (78.883193ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-484901 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0524 19:33:48.773592   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (661.57s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-484901 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.2
E0524 19:34:02.539014   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
E0524 19:35:33.043957   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-484901 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.2: (11m1.272028145s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-484901 -n no-preload-484901
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (661.57s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-qz5sp" [01aca996-d9d9-4e12-8366-23fbd9b8f3d3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01864403s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-qz5sp" [01aca996-d9d9-4e12-8366-23fbd9b8f3d3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009153619s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-848884 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)
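Note: together, UserAppExistsAfterStop and AddonExistsAfterStop confirm that both the user pod and the dashboard addon survived the stop/start cycle. The same existence check can be done directly; a sketch using the names visible in this log:

	// addon_exists_sketch.go — verifies the dashboard deployment still exists
	// after a restart, mirroring the kubectl describe call above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "old-k8s-version-848884",
			"get", "deploy", "dashboard-metrics-scraper",
			"-n", "kubernetes-dashboard", "-o", "name").CombinedOutput()
		if err != nil {
			fmt.Printf("addon missing after restart: %v\n%s", err, out)
			return
		}
		fmt.Printf("addon present: %s", out) // deployment.apps/dashboard-metrics-scraper
	}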

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-848884 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)
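Note: VerifyKubernetesImages scans `crictl images -o json` and reports anything outside the expected system-image set, which is how kindnetd and busybox are called out above. A sketch of that kind of scan; the images/repoTags field names follow crictl's JSON output as commonly documented and should be treated as an assumption here:

	// crictl_scan_sketch.go — lists repo tags from `crictl images -o json`
	// output and prints the ones that are not Kubernetes system images.
	package main

	import (
		"encoding/json"
		"fmt"
		"strings"
	)

	func main() {
		// Stand-in for `minikube ssh -- sudo crictl images -o json` output.
		raw := `{"images":[{"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"]},
		         {"repoTags":["registry.k8s.io/kube-apiserver:v1.16.0"]}]}`
		var listing struct {
			Images []struct {
				RepoTags []string `json:"repoTags"`
			} `json:"images"`
		}
		if err := json.Unmarshal([]byte(raw), &listing); err != nil {
			panic(err)
		}
		for _, img := range listing.Images {
			for _, tag := range img.RepoTags {
				if !strings.HasPrefix(tag, "registry.k8s.io/") &&
					!strings.HasPrefix(tag, "k8s.gcr.io/") {
					fmt.Println("Found non-minikube image:", tag)
				}
			}
		}
	}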

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-848884 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-848884 -n old-k8s-version-848884
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-848884 -n old-k8s-version-848884: exit status 2 (279.756808ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-848884 -n old-k8s-version-848884
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-848884 -n old-k8s-version-848884: exit status 2 (271.620708ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-848884 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-848884 -n old-k8s-version-848884
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-848884 -n old-k8s-version-848884
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.90s)

TestStartStop/group/embed-certs/serial/FirstStart (108.72s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-638092 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.2
E0524 19:43:31.823442   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 19:43:48.772964   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 19:44:02.538752   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-638092 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.2: (1m48.721802334s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (108.72s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.61s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-638092 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fd014af6-939a-4856-939f-86b4841ec321] Pending
helpers_test.go:344: "busybox" [fd014af6-939a-4856-939f-86b4841ec321] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fd014af6-939a-4856-939f-86b4841ec321] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.020605033s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-638092 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.61s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.29s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-638092 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-638092 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.177842287s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-638092 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.29s)

TestStartStop/group/embed-certs/serial/Stop (91.96s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-638092 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-638092 --alsologtostderr -v=3: (1m31.957330349s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.96s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-bd8cg" [4854eab5-302e-43d8-b762-bdd9f96428bd] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.023602396s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-bd8cg" [4854eab5-302e-43d8-b762-bdd9f96428bd] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009751739s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-484901 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-484901 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/no-preload/serial/Pause (2.91s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-484901 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-484901 -n no-preload-484901
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-484901 -n no-preload-484901: exit status 2 (279.964106ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-484901 -n no-preload-484901
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-484901 -n no-preload-484901: exit status 2 (289.481292ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-484901 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-484901 -n no-preload-484901
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-484901 -n no-preload-484901
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.91s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (69.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-004702 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.2
E0524 19:45:33.044012   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-004702 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.2: (1m9.346329934s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (69.35s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-638092 -n embed-certs-638092
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-638092 -n embed-certs-638092: exit status 7 (69.528895ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-638092 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)
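
Exit status 7 from the status probe is what the test expects here: the profile was stopped in the previous step, so --format={{.Host}} prints "Stopped" and the command exits non-zero. The step then confirms an addon can still be enabled while the cluster is down; a minimal sketch against the same stopped profile:

	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-638092      # prints "Stopped", exit status 7
	out/minikube-linux-amd64 addons enable dashboard -p embed-certs-638092 --images=MetricsScraper=registry.k8s.io/echoserver:1.4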

TestStartStop/group/embed-certs/serial/SecondStart (635.66s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-638092 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-638092 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.2: (10m35.307739564s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-638092 -n embed-certs-638092
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (635.66s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.64s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-004702 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-004702 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.639741709s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.64s)
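
The warning is expected for this profile: it was started with --network-plugin=cni but no CNI manifest, so workload pods cannot schedule and the suite skips the pod-level checks. To let pods schedule, one would apply a CNI manifest by hand; an illustrative sketch only, reusing the kube-flannel manifest this run passes elsewhere via --cni=testdata/kube-flannel.yaml:

	kubectl --context newest-cni-004702 apply -f testdata/kube-flannel.yaml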

TestStartStop/group/newest-cni/serial/Stop (7.14s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-004702 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-004702 --alsologtostderr -v=3: (7.138383171s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.14s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-004702 -n newest-cni-004702
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-004702 -n newest-cni-004702: exit status 7 (100.648253ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-004702 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/newest-cni/serial/SecondStart (87.35s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-004702 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.2
E0524 19:47:03.098466   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/old-k8s-version-848884/client.crt: no such file or directory
E0524 19:47:03.103783   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/old-k8s-version-848884/client.crt: no such file or directory
E0524 19:47:03.114330   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/old-k8s-version-848884/client.crt: no such file or directory
E0524 19:47:03.137269   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/old-k8s-version-848884/client.crt: no such file or directory
E0524 19:47:03.178173   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/old-k8s-version-848884/client.crt: no such file or directory
E0524 19:47:03.259208   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/old-k8s-version-848884/client.crt: no such file or directory
E0524 19:47:03.419751   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/old-k8s-version-848884/client.crt: no such file or directory
E0524 19:47:03.740620   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/old-k8s-version-848884/client.crt: no such file or directory
E0524 19:47:04.381697   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/old-k8s-version-848884/client.crt: no such file or directory
E0524 19:47:05.662524   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/old-k8s-version-848884/client.crt: no such file or directory
E0524 19:47:06.351296   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/no-preload-484901/client.crt: no such file or directory
E0524 19:47:06.356593   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/no-preload-484901/client.crt: no such file or directory
E0524 19:47:06.366902   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/no-preload-484901/client.crt: no such file or directory
E0524 19:47:06.387166   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/no-preload-484901/client.crt: no such file or directory
E0524 19:47:06.427450   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/no-preload-484901/client.crt: no such file or directory
E0524 19:47:06.507587   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/no-preload-484901/client.crt: no such file or directory
E0524 19:47:06.667962   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/no-preload-484901/client.crt: no such file or directory
E0524 19:47:06.988844   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/no-preload-484901/client.crt: no such file or directory
E0524 19:47:07.629729   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/no-preload-484901/client.crt: no such file or directory
E0524 19:47:08.223317   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/old-k8s-version-848884/client.crt: no such file or directory
E0524 19:47:08.910602   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/no-preload-484901/client.crt: no such file or directory
E0524 19:47:11.471521   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/no-preload-484901/client.crt: no such file or directory
E0524 19:47:13.344401   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/old-k8s-version-848884/client.crt: no such file or directory
E0524 19:47:16.592275   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/no-preload-484901/client.crt: no such file or directory
E0524 19:47:23.585472   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/old-k8s-version-848884/client.crt: no such file or directory
E0524 19:47:26.832856   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/no-preload-484901/client.crt: no such file or directory
E0524 19:47:44.066086   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/old-k8s-version-848884/client.crt: no such file or directory
E0524 19:47:47.313498   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/no-preload-484901/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-004702 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.2: (1m26.937785906s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-004702 -n newest-cni-004702
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (87.35s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-004702 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)
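
The image check lists everything in the containerd store and flags anything outside the expected Kubernetes image set (the kindnetd image above). A sketch for inspecting the same list by hand, assuming jq is available on the host:

	out/minikube-linux-amd64 ssh -p newest-cni-004702 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'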

TestStartStop/group/newest-cni/serial/Pause (3.13s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-004702 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-004702 -n newest-cni-004702
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-004702 -n newest-cni-004702: exit status 2 (336.38286ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-004702 -n newest-cni-004702
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-004702 -n newest-cni-004702: exit status 2 (397.558343ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-004702 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-004702 -n newest-cni-004702
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-004702 -n newest-cni-004702
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.13s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (114.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-795515 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.2
E0524 19:48:25.027093   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/old-k8s-version-848884/client.crt: no such file or directory
E0524 19:48:28.274597   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/no-preload-484901/client.crt: no such file or directory
E0524 19:48:48.773666   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
E0524 19:49:02.538582   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
E0524 19:49:46.947489   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/old-k8s-version-848884/client.crt: no such file or directory
E0524 19:49:50.196041   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/no-preload-484901/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-795515 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.2: (1m54.197295219s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (114.20s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-795515 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e28fbf41-3258-4e1e-8e96-cf6b17cc5b89] Pending
helpers_test.go:344: "busybox" [e28fbf41-3258-4e1e-8e96-cf6b17cc5b89] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e28fbf41-3258-4e1e-8e96-cf6b17cc5b89] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.036731015s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-795515 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.73s)
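
DeployApp creates a busybox pod from testdata/busybox.yaml, waits for it to become Ready (the suite polls on the integration-test=busybox label), then execs a trivial command to prove the container is usable. A rough hand-run equivalent; the kubectl wait form is an assumption, not what the suite itself runs:

	kubectl --context default-k8s-diff-port-795515 create -f testdata/busybox.yaml
	kubectl --context default-k8s-diff-port-795515 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m0s
	kubectl --context default-k8s-diff-port-795515 exec busybox -- /bin/sh -c "ulimit -n"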

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-795515 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-795515 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.111322721s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-795515 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-795515 --alsologtostderr -v=3
E0524 19:50:25.591781   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/addons-934336/client.crt: no such file or directory
E0524 19:50:33.044091   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-795515 --alsologtostderr -v=3: (1m31.924687664s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.92s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-795515 -n default-k8s-diff-port-795515
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-795515 -n default-k8s-diff-port-795515: exit status 7 (90.65827ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-795515 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (400.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-795515 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.2
E0524 19:52:03.098433   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/old-k8s-version-848884/client.crt: no such file or directory
E0524 19:52:06.351782   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/no-preload-484901/client.crt: no such file or directory
E0524 19:52:30.788169   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/old-k8s-version-848884/client.crt: no such file or directory
E0524 19:52:34.036316   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/no-preload-484901/client.crt: no such file or directory
E0524 19:53:48.773098   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/ingress-addon-legacy-889011/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-795515 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.27.2: (6m40.390031385s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-795515 -n default-k8s-diff-port-795515
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (400.71s)

TestNetworkPlugins/group/auto/Start (74.21s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p auto-036096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p auto-036096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m14.210541666s)
--- PASS: TestNetworkPlugins/group/auto/Start (74.21s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-849274
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

TestNetworkPlugins/group/kindnet/Start (83.27s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-036096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-036096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m23.267271441s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (83.27s)

TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-036096 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

TestNetworkPlugins/group/auto/NetCatPod (11.44s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-036096 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-bhtzd" [39c086a0-4de5-42db-b537-fd039f238391] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-bhtzd" [39c086a0-4de5-42db-b537-fd039f238391] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.008812364s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.44s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-036096 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-036096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-036096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
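
DNS, Localhost, and HairPin are the three connectivity probes run from inside the netcat pod once it is healthy: a service-DNS lookup, a zero-I/O connect to localhost, and a connect back to the pod's own service name (the hairpin case). In the nc invocations, -w 5 is the connect timeout, -i 5 the interval, and -z means scan without sending data. The same probes by hand, verbatim from this run:

	kubectl --context auto-036096 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-036096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-036096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"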

TestNetworkPlugins/group/calico/Start (102.45s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p calico-036096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p calico-036096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m42.449246795s)
--- PASS: TestNetworkPlugins/group/calico/Start (102.45s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-58x4k" [0c30d5e4-2627-4cab-b1c7-e89a8c4f8131] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.019290397s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-036096 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.47s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-036096 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-g6thr" [54bd416e-8996-41ff-9244-3520ff939123] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-g6thr" [54bd416e-8996-41ff-9244-3520ff939123] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.012503924s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.47s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-036096 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-036096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-036096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/Start (94s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-036096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-036096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m33.995914479s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (94.00s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-lkpjj" [34078c51-3304-4460-b818-83bbb680ef15] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.020639396s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)
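
UserAppExistsAfterStop confirms that the dashboard deployed before the restart came back on its own: the suite polls for a Ready pod carrying the k8s-app=kubernetes-dashboard label. A hand-run equivalent using kubectl wait (an assumption; the suite uses its own polling helper):

	kubectl --context embed-certs-638092 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s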

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-lkpjj" [34078c51-3304-4460-b818-83bbb680ef15] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010197223s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-638092 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-638092 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/embed-certs/serial/Pause (2.39s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-638092 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-638092 -n embed-certs-638092
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-638092 -n embed-certs-638092: exit status 2 (243.360868ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-638092 -n embed-certs-638092
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-638092 -n embed-certs-638092: exit status 2 (238.769384ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-638092 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-638092 -n embed-certs-638092
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-638092 -n embed-certs-638092
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.39s)

TestNetworkPlugins/group/enable-default-cni/Start (107.55s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-036096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
E0524 19:56:56.095456   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/functional-420572/client.crt: no such file or directory
E0524 19:57:03.098640   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/old-k8s-version-848884/client.crt: no such file or directory
E0524 19:57:06.351272   79153 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/no-preload-484901/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-036096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m47.545681375s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (107.55s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-v9bdp" [0e705923-834b-47a8-bef7-f90345016c5f] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.023424111s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-036096 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.22s)

TestNetworkPlugins/group/calico/NetCatPod (10.55s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-036096 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-dwgwj" [a5c94c41-1044-41c6-b26b-fc8eafe4715e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-dwgwj" [a5c94c41-1044-41c6-b26b-fc8eafe4715e] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.011603531s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.55s)

TestNetworkPlugins/group/calico/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-036096 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-036096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-036096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-036096 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.51s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-036096 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-4224g" [fdc68a04-9431-4b3e-9cc1-8a59489c5bdd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-4224g" [fdc68a04-9431-4b3e-9cc1-8a59489c5bdd] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.008647996s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.51s)

TestNetworkPlugins/group/flannel/Start (89.63s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-036096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p flannel-036096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m29.627424334s)
--- PASS: TestNetworkPlugins/group/flannel/Start (89.63s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-036096 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-036096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-036096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (16.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-9dzwf" [85bf88c8-bab5-4696-b1b9-4969484eb180] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-9dzwf" [85bf88c8-bab5-4696-b1b9-4969484eb180] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.018812182s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (16.02s)

TestNetworkPlugins/group/bridge/Start (73.85s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-036096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p bridge-036096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m13.845580231s)
--- PASS: TestNetworkPlugins/group/bridge/Start (73.85s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-9dzwf" [85bf88c8-bab5-4696-b1b9-4969484eb180] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009405912s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-795515 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-036096 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.62s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-036096 replace --force -f testdata/netcat-deployment.yaml
net_test.go:148: (dbg) Done: kubectl --context enable-default-cni-036096 replace --force -f testdata/netcat-deployment.yaml: (2.435077687s)
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-qn9cs" [ae19af73-7aee-40d0-a2e1-70a3765684e0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-qn9cs" [ae19af73-7aee-40d0-a2e1-70a3765684e0] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.008390316s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.62s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-795515 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-795515 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-795515 -n default-k8s-diff-port-795515
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-795515 -n default-k8s-diff-port-795515: exit status 2 (270.854328ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-795515 -n default-k8s-diff-port-795515
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-795515 -n default-k8s-diff-port-795515: exit status 2 (272.764383ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-795515 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-795515 -n default-k8s-diff-port-795515
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-795515 -n default-k8s-diff-port-795515
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.80s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-036096 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-036096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-036096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-8vllh" [06649c89-02a0-4e92-b1f5-f1baf46cdc2c] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.016923198s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
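ControllerPod polls for pods labeled app=flannel in the kube-flannel namespace until they are Running and healthy, with a 10-minute ceiling. A kubectl-only approximation (a sketch; the harness uses its own poll loop rather than kubectl wait):

	kubectl --context flannel-036096 -n kube-flannel \
		wait --for=condition=Ready pod -l app=flannel --timeout=10m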

TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-036096 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)

TestNetworkPlugins/group/flannel/NetCatPod (11.4s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-036096 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-nnvk8" [88ffa1df-6e8f-4973-9fd1-2f0b92985969] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-nnvk8" [88ffa1df-6e8f-4973-9fd1-2f0b92985969] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.00667285s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.40s)
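NetCatPod force-replaces the netcat deployment from the test's testdata and then waits for its pod to pass readiness, which is why the pod is logged first as Pending with unready containers and then as Running. Roughly the same flow by hand, assuming a minikube source checkout so the testdata path resolves:

	kubectl --context flannel-036096 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context flannel-036096 wait --for=condition=Available deployment/netcat --timeout=15m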

TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-036096 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

TestNetworkPlugins/group/bridge/NetCatPod (9.42s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-036096 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-lpnf7" [c18581cf-f46c-45ff-b414-9ad869d7c3da] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-lpnf7" [c18581cf-f46c-45ff-b414-9ad869d7c3da] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.017175608s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.42s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-036096 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-036096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-036096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-036096 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-036096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-036096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)
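Taken together, the DNS, Localhost, and HairPin subtests exercise three distinct paths from inside the netcat pod: cluster DNS resolution, the pod reaching itself over localhost, and the pod reaching itself back through its own service (hairpin NAT). The three probes, exactly as the harness issues them:

	kubectl --context flannel-036096 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context flannel-036096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context flannel-036096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"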

Test skip (35/300)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.27.2/cached-images 0
13 TestDownloadOnly/v1.27.2/binaries 0
14 TestDownloadOnly/v1.27.2/kubectl 0
18 TestDownloadOnlyKic 0
29 TestAddons/parallel/Olm 0
39 TestDockerFlags 0
43 TestHyperKitDriverInstallOrUpdate 0
44 TestHyperkitDriverSkipUpgrade 0
94 TestFunctional/parallel/DockerEnv 0
95 TestFunctional/parallel/PodmanEnv 0
103 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
104 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
105 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
106 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
107 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
108 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
109 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
110 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
143 TestGvisorAddon 0
144 TestImageBuild 0
177 TestKicCustomNetwork 0
178 TestKicExistingNetwork 0
179 TestKicCustomSubnet 0
180 TestKicStaticIP 0
211 TestChangeNoneUser 0
214 TestScheduledStopWindows 0
216 TestSkaffold 0
218 TestInsufficientStorage 0
222 TestMissingContainerUpgrade 0
232 TestStartStop/group/disable-driver-mounts 0.16
237 TestNetworkPlugins/group/kubenet 3.65
245 TestNetworkPlugins/group/cilium 3.53

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.27.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.2/cached-images (0.00s)

TestDownloadOnly/v1.27.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.2/binaries (0.00s)

TestDownloadOnly/v1.27.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:458: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
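All eight TunnelCmd subtests skip for the same reason: adjusting routes needs root, and this environment has no passwordless sudo for 'route'. A pre-flight check in the same spirit (a sketch; sudo -n fails instead of prompting when a password would be required):

	if sudo -n true 2>/dev/null; then
		echo "passwordless sudo available; tunnel tests can run"
	else
		echo "password required; TunnelCmd tests will skip"
	fi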

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-398140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-398140
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.65s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:92: Skipping the test as the containerd container runtime requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-036096 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-036096

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-036096

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-036096

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-036096

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-036096

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-036096

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-036096

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-036096

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-036096

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-036096

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: /etc/hosts:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: /etc/resolv.conf:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-036096

>>> host: crictl pods:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: crictl containers:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> k8s: describe netcat deployment:
error: context "kubenet-036096" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-036096" does not exist

>>> k8s: netcat logs:
error: context "kubenet-036096" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-036096" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-036096" does not exist

>>> k8s: coredns logs:
error: context "kubenet-036096" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-036096" does not exist

>>> k8s: api server logs:
error: context "kubenet-036096" does not exist

>>> host: /etc/cni:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: ip a s:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: ip r s:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: iptables-save:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: iptables table nat:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-036096" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-036096" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-036096" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: kubelet daemon config:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> k8s: kubelet logs:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16573-71939/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 24 May 2023 19:24:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: cluster_info
    server: https://192.168.50.206:8443
  name: kubernetes-upgrade-043575
contexts:
- context:
    cluster: kubernetes-upgrade-043575
    user: kubernetes-upgrade-043575
  name: kubernetes-upgrade-043575
current-context: kubernetes-upgrade-043575
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-043575
  user:
    client-certificate: /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/kubernetes-upgrade-043575/client.crt
    client-key: /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/kubernetes-upgrade-043575/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-036096

>>> host: docker daemon status:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: docker daemon config:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: docker system info:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: cri-docker daemon status:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: cri-docker daemon config:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: cri-dockerd version:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: containerd daemon status:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: containerd daemon config:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: containerd config dump:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: crio daemon status:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: crio daemon config:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: /etc/crio:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

>>> host: crio config:
* Profile "kubenet-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-036096"

----------------------- debugLogs end: kubenet-036096 [took: 3.491219412s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-036096" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-036096
--- SKIP: TestNetworkPlugins/group/kubenet (3.65s)
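Every probe in the debugLogs dump above fails with a missing-context error because the skipped profile never started a cluster, so no kubeconfig context was ever written. A guard of the same shape, sketched with a hypothetical probe command:

	if kubectl config get-contexts -o name | grep -qx kubenet-036096; then
		kubectl --context kubenet-036096 get pods -A   # example probe
	else
		echo "context kubenet-036096 not found; skipping debug probes"
	fi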

TestNetworkPlugins/group/cilium (3.53s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-036096 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-036096

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-036096

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-036096

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-036096

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-036096

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-036096

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-036096

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-036096

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-036096

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-036096

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-036096

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-036096" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-036096" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-036096" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-036096" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-036096" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-036096" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-036096" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-036096" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-036096

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-036096

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-036096" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-036096" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-036096

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-036096

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-036096" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-036096" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-036096" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-036096" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-036096" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

>>> host: kubelet daemon config:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

>>> k8s: kubelet logs:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16573-71939/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 24 May 2023 19:24:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: cluster_info
    server: https://192.168.50.206:8443
  name: kubernetes-upgrade-043575
contexts:
- context:
    cluster: kubernetes-upgrade-043575
    user: kubernetes-upgrade-043575
  name: kubernetes-upgrade-043575
current-context: kubernetes-upgrade-043575
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-043575
  user:
    client-certificate: /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/kubernetes-upgrade-043575/client.crt
    client-key: /home/jenkins/minikube-integration/16573-71939/.minikube/profiles/kubernetes-upgrade-043575/client.key
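Note: this kubeconfig defines only the kubernetes-upgrade-043575 cluster, context, and user, which is consistent with every kubectl call above failing with context "cilium-036096" does not exist. A minimal sketch for confirming which contexts a kubeconfig actually provides (standard kubectl/minikube commands; the profile name is taken from this log):

	# list every context recorded in the active kubeconfig
	kubectl config get-contexts -o name

	# show which context kubectl uses by default
	kubectl config current-context

	# a minikube profile only gains a kubeconfig context once its cluster has started
	minikube start -p cilium-036096
	kubectl --context cilium-036096 get pods -A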

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-036096

>>> host: docker daemon status:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

>>> host: docker daemon config:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

>>> host: docker system info:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

>>> host: cri-docker daemon status:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

>>> host: cri-docker daemon config:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

>>> host: cri-dockerd version:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

>>> host: containerd daemon status:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

>>> host: containerd daemon config:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

>>> host: containerd config dump:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

>>> host: crio daemon status:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

>>> host: crio daemon config:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

>>> host: /etc/crio:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

>>> host: crio config:
* Profile "cilium-036096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-036096"

----------------------- debugLogs end: cilium-036096 [took: 3.365380604s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-036096" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-036096
--- SKIP: TestNetworkPlugins/group/cilium (3.53s)
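The group/cilium subtest is skipped by the suite itself, so the debug-log collection above ran against a profile that was never started. To reproduce a single network-plugin subtest locally, something like the following should work (a sketch: the test name matches the SKIP line above; the package path and flags follow standard go test usage and are assumptions about this repo's layout, so check the repo's Makefile for the exact invocation):

	# run only the cilium network-plugin subtest from the minikube repo root
	go test ./test/integration -run 'TestNetworkPlugins/group/cilium' -timeout 60m -v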
