Test Report: KVM_Linux_containerd 15242

45262e5daa3ddfe0c6cdcb881d2af1d3532e9ce3:2022-10-31:26351

Failed tests (3/296)

Order  Failed test                        Duration (s)
74     TestFunctional/parallel/ConfigCmd  0.55
201    TestPreload                        161.76
233    TestStoppedBinaryUpgrade/Upgrade   186.62
TestFunctional/parallel/ConfigCmd (0.55s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180719 config get cpus: exit status 14 (96.813947ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 config set cpus 2

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1203: expected config error for "out/minikube-linux-amd64 -p functional-180719 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"! These changes will take effect upon a minikube delete and then a minikube start\nE1031 18:09:36.161935  489608 root.go:91] failed to log command end to audit: failed to find a log row with id equals to ea8c3fd9-020c-4f8a-acc0-691e91713d6b"*
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 config unset cpus
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180719 config get cpus: exit status 14 (79.720754ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- FAIL: TestFunctional/parallel/ConfigCmd (0.55s)
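The assertion at functional_test.go:1203 fails on an exact comparison of stderr: the expected warning line is present, but minikube appended an extra audit-log error after it, which breaks the match. A minimal sketch of that mismatch (strings taken from the failure message above; the comparison itself is an illustration, not minikube's actual test code):

```python
# Strings copied from the failure message above.
expected = "! These changes will take effect upon a minikube delete and then a minikube start"
actual = (
    "! These changes will take effect upon a minikube delete and then a minikube start\n"
    "E1031 18:09:36.161935  489608 root.go:91] failed to log command end to audit: "
    "failed to find a log row with id equals to ea8c3fd9-020c-4f8a-acc0-691e91713d6b"
)

# The exact match fails because of the trailing audit error line,
# even though the expected warning itself was emitted.
print(actual == expected)                  # False
print(actual.splitlines()[0] == expected)  # True
```

The audit error from root.go:91 is incidental to the config behavior under test; the warning the test looks for did appear as the first line of stderr.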

TestPreload (161.76s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-184006 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E1031 18:41:17.540260  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
E1031 18:41:57.675733  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-184006 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m54.763922899s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-184006 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-184006 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (1.730011807s)
preload_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-184006 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.6
preload_test.go:67: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-184006 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.6: (41.986496073s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-184006 -- sudo crictl image ls
preload_test.go:81: Expected to find gcr.io/k8s-minikube/busybox in output of `docker images`, instead got 
-- stdout --
	IMAGE                                     TAG                  IMAGE ID            SIZE
	docker.io/kindest/kindnetd                v20220726-ed811e41   d921cee849482       25.8MB
	gcr.io/k8s-minikube/storage-provisioner   v5                   6e38f40d628db       9.06MB
	k8s.gcr.io/coredns/coredns                v1.8.6               a4ca41631cc7a       13.6MB
	k8s.gcr.io/etcd                           3.5.3-0              aebe758cef4cd       102MB
	k8s.gcr.io/kube-apiserver                 v1.24.6              860f263331c95       33.8MB
	k8s.gcr.io/kube-controller-manager        v1.24.6              c6c20157a4233       31MB
	k8s.gcr.io/kube-proxy                     v1.24.4              7a53d1e08ef58       39.5MB
	k8s.gcr.io/kube-proxy                     v1.24.6              0bb39497ab33b       39.5MB
	k8s.gcr.io/kube-scheduler                 v1.24.6              c786c777a4e1c       15.5MB
	k8s.gcr.io/pause                          3.7                  221177c6082a8       311kB

-- /stdout --
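The check at preload_test.go:81 scans the `crictl image ls` output for `gcr.io/k8s-minikube/busybox`; the listing above no longer contains it after the restart onto v1.24.6, because applying the v1.24.6 preload replaced the image store where the manually pulled busybox lived. The failing condition can be sketched as follows (output lines abridged from the log above; the scan is an illustration, not the test's actual code):

```python
# Abridged from the `crictl image ls` output in the log above.
crictl_output = """\
IMAGE                                     TAG                  IMAGE ID            SIZE
docker.io/kindest/kindnetd                v20220726-ed811e41   d921cee849482       25.8MB
gcr.io/k8s-minikube/storage-provisioner   v5                   6e38f40d628db       9.06MB
k8s.gcr.io/kube-apiserver                 v1.24.6              860f263331c95       33.8MB
k8s.gcr.io/kube-proxy                     v1.24.4              7a53d1e08ef58       39.5MB
"""

# The test passes only if the busybox image pulled before the restart survives it.
found = any("gcr.io/k8s-minikube/busybox" in line
            for line in crictl_output.splitlines())
print(found)  # False: the pulled image is gone after the v1.24.6 preload was applied
```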
panic.go:522: *** TestPreload FAILED at 2022-10-31 18:42:45.70559404 +0000 UTC m=+2531.564030081
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-184006 -n test-preload-184006
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-184006 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-184006 logs -n 25: (1.101368644s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-181746 cp multinode-181746-m03:/home/docker/cp-test.txt                       | multinode-181746     | jenkins | v1.27.1 | 31 Oct 22 18:21 UTC | 31 Oct 22 18:21 UTC |
	|         | multinode-181746:/home/docker/cp-test_multinode-181746-m03_multinode-181746.txt         |                      |         |         |                     |                     |
	| ssh     | multinode-181746 ssh -n                                                                 | multinode-181746     | jenkins | v1.27.1 | 31 Oct 22 18:21 UTC | 31 Oct 22 18:21 UTC |
	|         | multinode-181746-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-181746 ssh -n multinode-181746 sudo cat                                       | multinode-181746     | jenkins | v1.27.1 | 31 Oct 22 18:21 UTC | 31 Oct 22 18:21 UTC |
	|         | /home/docker/cp-test_multinode-181746-m03_multinode-181746.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-181746 cp multinode-181746-m03:/home/docker/cp-test.txt                       | multinode-181746     | jenkins | v1.27.1 | 31 Oct 22 18:21 UTC | 31 Oct 22 18:21 UTC |
	|         | multinode-181746-m02:/home/docker/cp-test_multinode-181746-m03_multinode-181746-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-181746 ssh -n                                                                 | multinode-181746     | jenkins | v1.27.1 | 31 Oct 22 18:21 UTC | 31 Oct 22 18:21 UTC |
	|         | multinode-181746-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-181746 ssh -n multinode-181746-m02 sudo cat                                   | multinode-181746     | jenkins | v1.27.1 | 31 Oct 22 18:21 UTC | 31 Oct 22 18:21 UTC |
	|         | /home/docker/cp-test_multinode-181746-m03_multinode-181746-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-181746 node stop m03                                                          | multinode-181746     | jenkins | v1.27.1 | 31 Oct 22 18:21 UTC | 31 Oct 22 18:21 UTC |
	| node    | multinode-181746 node start                                                             | multinode-181746     | jenkins | v1.27.1 | 31 Oct 22 18:21 UTC | 31 Oct 22 18:22 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-181746                                                                | multinode-181746     | jenkins | v1.27.1 | 31 Oct 22 18:22 UTC |                     |
	| stop    | -p multinode-181746                                                                     | multinode-181746     | jenkins | v1.27.1 | 31 Oct 22 18:22 UTC | 31 Oct 22 18:25 UTC |
	| start   | -p multinode-181746                                                                     | multinode-181746     | jenkins | v1.27.1 | 31 Oct 22 18:25 UTC | 31 Oct 22 18:31 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-181746                                                                | multinode-181746     | jenkins | v1.27.1 | 31 Oct 22 18:31 UTC |                     |
	| node    | multinode-181746 node delete                                                            | multinode-181746     | jenkins | v1.27.1 | 31 Oct 22 18:31 UTC | 31 Oct 22 18:31 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-181746 stop                                                                   | multinode-181746     | jenkins | v1.27.1 | 31 Oct 22 18:31 UTC | 31 Oct 22 18:34 UTC |
	| start   | -p multinode-181746                                                                     | multinode-181746     | jenkins | v1.27.1 | 31 Oct 22 18:34 UTC | 31 Oct 22 18:39 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| node    | list -p multinode-181746                                                                | multinode-181746     | jenkins | v1.27.1 | 31 Oct 22 18:39 UTC |                     |
	| start   | -p multinode-181746-m02                                                                 | multinode-181746-m02 | jenkins | v1.27.1 | 31 Oct 22 18:39 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| start   | -p multinode-181746-m03                                                                 | multinode-181746-m03 | jenkins | v1.27.1 | 31 Oct 22 18:39 UTC | 31 Oct 22 18:40 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| node    | add -p multinode-181746                                                                 | multinode-181746     | jenkins | v1.27.1 | 31 Oct 22 18:40 UTC |                     |
	| delete  | -p multinode-181746-m03                                                                 | multinode-181746-m03 | jenkins | v1.27.1 | 31 Oct 22 18:40 UTC | 31 Oct 22 18:40 UTC |
	| delete  | -p multinode-181746                                                                     | multinode-181746     | jenkins | v1.27.1 | 31 Oct 22 18:40 UTC | 31 Oct 22 18:40 UTC |
	| start   | -p test-preload-184006                                                                  | test-preload-184006  | jenkins | v1.27.1 | 31 Oct 22 18:40 UTC | 31 Oct 22 18:42 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| ssh     | -p test-preload-184006                                                                  | test-preload-184006  | jenkins | v1.27.1 | 31 Oct 22 18:42 UTC | 31 Oct 22 18:42 UTC |
	|         | -- sudo crictl pull                                                                     |                      |         |         |                     |                     |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| start   | -p test-preload-184006                                                                  | test-preload-184006  | jenkins | v1.27.1 | 31 Oct 22 18:42 UTC | 31 Oct 22 18:42 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.6                                                            |                      |         |         |                     |                     |
	| ssh     | -p test-preload-184006 -- sudo                                                          | test-preload-184006  | jenkins | v1.27.1 | 31 Oct 22 18:42 UTC | 31 Oct 22 18:42 UTC |
	|         | crictl image ls                                                                         |                      |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/10/31 18:42:03
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 18:42:03.544771  499771 out.go:296] Setting OutFile to fd 1 ...
	I1031 18:42:03.544889  499771 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:42:03.544899  499771 out.go:309] Setting ErrFile to fd 2...
	I1031 18:42:03.544904  499771 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:42:03.545031  499771 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15242-478932/.minikube/bin
	I1031 18:42:03.545568  499771 out.go:303] Setting JSON to false
	I1031 18:42:03.546506  499771 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8677,"bootTime":1667233047,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 18:42:03.546596  499771 start.go:126] virtualization: kvm guest
	I1031 18:42:03.549375  499771 out.go:177] * [test-preload-184006] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1031 18:42:03.551016  499771 notify.go:220] Checking for updates...
	I1031 18:42:03.552561  499771 out.go:177]   - MINIKUBE_LOCATION=15242
	I1031 18:42:03.554182  499771 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 18:42:03.555589  499771 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15242-478932/kubeconfig
	I1031 18:42:03.557175  499771 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-478932/.minikube
	I1031 18:42:03.558812  499771 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 18:42:03.560634  499771 config.go:180] Loaded profile config "test-preload-184006": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I1031 18:42:03.561009  499771 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1031 18:42:03.561062  499771 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:42:03.576879  499771 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:32827
	I1031 18:42:03.577281  499771 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:42:03.577805  499771 main.go:134] libmachine: Using API Version  1
	I1031 18:42:03.577828  499771 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:42:03.578206  499771 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:42:03.578384  499771 main.go:134] libmachine: (test-preload-184006) Calling .DriverName
	I1031 18:42:03.580420  499771 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	I1031 18:42:03.581912  499771 driver.go:365] Setting default libvirt URI to qemu:///system
	I1031 18:42:03.582289  499771 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1031 18:42:03.582349  499771 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:42:03.597743  499771 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:41749
	I1031 18:42:03.598155  499771 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:42:03.598648  499771 main.go:134] libmachine: Using API Version  1
	I1031 18:42:03.598679  499771 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:42:03.599031  499771 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:42:03.599244  499771 main.go:134] libmachine: (test-preload-184006) Calling .DriverName
	I1031 18:42:03.635911  499771 out.go:177] * Using the kvm2 driver based on existing profile
	I1031 18:42:03.637437  499771 start.go:282] selected driver: kvm2
	I1031 18:42:03.637461  499771 start.go:808] validating driver "kvm2" against &{Name:test-preload-184006 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15159/minikube-v1.27.0-1666206003-15159-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-184006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 18:42:03.637591  499771 start.go:819] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 18:42:03.638251  499771 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 18:42:03.638472  499771 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15242-478932/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 18:42:03.653579  499771 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.27.1
	I1031 18:42:03.653902  499771 start_flags.go:888] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1031 18:42:03.653938  499771 cni.go:95] Creating CNI manager for ""
	I1031 18:42:03.653954  499771 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
	I1031 18:42:03.653962  499771 start_flags.go:317] config:
	{Name:test-preload-184006 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15159/minikube-v1.27.0-1666206003-15159-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.6 ClusterName:test-preload-184006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 18:42:03.654065  499771 iso.go:124] acquiring lock: {Name:mk75bc6a3e159cb2de2b5f76a06013b9e3e93a7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 18:42:03.656139  499771 out.go:177] * Starting control plane node test-preload-184006 in cluster test-preload-184006
	I1031 18:42:03.657567  499771 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I1031 18:42:03.774262  499771 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
	I1031 18:42:03.774310  499771 cache.go:57] Caching tarball of preloaded images
	I1031 18:42:03.774557  499771 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I1031 18:42:03.776826  499771 out.go:177] * Downloading Kubernetes v1.24.6 preload ...
	I1031 18:42:03.778677  499771 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I1031 18:42:03.894501  499771 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.6/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:0de094b674a9198bc47721c3b23603d5 -> /home/jenkins/minikube-integration/15242-478932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4
	I1031 18:42:06.916165  499771 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I1031 18:42:06.916271  499771 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15242-478932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 ...
	I1031 18:42:07.785683  499771 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.6 on containerd
	I1031 18:42:07.785858  499771 profile.go:148] Saving config to /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/test-preload-184006/config.json ...
	I1031 18:42:07.786049  499771 cache.go:208] Successfully downloaded all kic artifacts
	I1031 18:42:07.786081  499771 start.go:364] acquiring machines lock for test-preload-184006: {Name:mk0dc8caaddb19c1678d177d08b6bcca15077d40 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 18:42:07.786164  499771 start.go:368] acquired machines lock for "test-preload-184006" in 67.337µs
	I1031 18:42:07.786180  499771 start.go:96] Skipping create...Using existing machine configuration
	I1031 18:42:07.786185  499771 fix.go:55] fixHost starting: 
	I1031 18:42:07.786429  499771 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1031 18:42:07.786465  499771 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:42:07.801711  499771 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:40859
	I1031 18:42:07.802175  499771 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:42:07.802669  499771 main.go:134] libmachine: Using API Version  1
	I1031 18:42:07.802696  499771 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:42:07.802989  499771 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:42:07.803185  499771 main.go:134] libmachine: (test-preload-184006) Calling .DriverName
	I1031 18:42:07.803399  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetState
	I1031 18:42:07.804895  499771 fix.go:103] recreateIfNeeded on test-preload-184006: state=Running err=<nil>
	W1031 18:42:07.804914  499771 fix.go:129] unexpected machine state, will restart: <nil>
	I1031 18:42:07.806890  499771 out.go:177] * Updating the running kvm2 "test-preload-184006" VM ...
	I1031 18:42:07.808467  499771 machine.go:88] provisioning docker machine ...
	I1031 18:42:07.808493  499771 main.go:134] libmachine: (test-preload-184006) Calling .DriverName
	I1031 18:42:07.808704  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetMachineName
	I1031 18:42:07.808867  499771 buildroot.go:166] provisioning hostname "test-preload-184006"
	I1031 18:42:07.808888  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetMachineName
	I1031 18:42:07.809057  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHHostname
	I1031 18:42:07.811427  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:07.811900  499771 main.go:134] libmachine: (test-preload-184006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:f5:32", ip: ""} in network mk-test-preload-184006: {Iface:virbr1 ExpiryTime:2022-10-31 19:40:22 +0000 UTC Type:0 Mac:52:54:00:b7:f5:32 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-184006 Clientid:01:52:54:00:b7:f5:32}
	I1031 18:42:07.811928  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined IP address 192.168.39.32 and MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:07.812095  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHPort
	I1031 18:42:07.812261  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHKeyPath
	I1031 18:42:07.812410  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHKeyPath
	I1031 18:42:07.812551  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHUsername
	I1031 18:42:07.812735  499771 main.go:134] libmachine: Using SSH client type: native
	I1031 18:42:07.812893  499771 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I1031 18:42:07.812933  499771 main.go:134] libmachine: About to run SSH command:
	sudo hostname test-preload-184006 && echo "test-preload-184006" | sudo tee /etc/hostname
	I1031 18:42:07.933390  499771 main.go:134] libmachine: SSH cmd err, output: <nil>: test-preload-184006
	
	I1031 18:42:07.933427  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHHostname
	I1031 18:42:07.936419  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:07.936800  499771 main.go:134] libmachine: (test-preload-184006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:f5:32", ip: ""} in network mk-test-preload-184006: {Iface:virbr1 ExpiryTime:2022-10-31 19:40:22 +0000 UTC Type:0 Mac:52:54:00:b7:f5:32 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-184006 Clientid:01:52:54:00:b7:f5:32}
	I1031 18:42:07.936834  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined IP address 192.168.39.32 and MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:07.937000  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHPort
	I1031 18:42:07.937203  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHKeyPath
	I1031 18:42:07.937404  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHKeyPath
	I1031 18:42:07.937567  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHUsername
	I1031 18:42:07.937737  499771 main.go:134] libmachine: Using SSH client type: native
	I1031 18:42:07.937903  499771 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I1031 18:42:07.937931  499771 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-184006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-184006/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-184006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 18:42:08.046889  499771 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1031 18:42:08.046923  499771 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15242-478932/.minikube CaCertPath:/home/jenkins/minikube-integration/15242-478932/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15242-478932/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15242-478932/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15242-478932/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15242-478932/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15242-478932/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15242-478932/.minikube}
	I1031 18:42:08.046938  499771 buildroot.go:174] setting up certificates
	I1031 18:42:08.046948  499771 provision.go:83] configureAuth start
	I1031 18:42:08.046959  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetMachineName
	I1031 18:42:08.047270  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetIP
	I1031 18:42:08.050073  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:08.050500  499771 main.go:134] libmachine: (test-preload-184006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:f5:32", ip: ""} in network mk-test-preload-184006: {Iface:virbr1 ExpiryTime:2022-10-31 19:40:22 +0000 UTC Type:0 Mac:52:54:00:b7:f5:32 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-184006 Clientid:01:52:54:00:b7:f5:32}
	I1031 18:42:08.050559  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined IP address 192.168.39.32 and MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:08.050691  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHHostname
	I1031 18:42:08.053002  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:08.053387  499771 main.go:134] libmachine: (test-preload-184006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:f5:32", ip: ""} in network mk-test-preload-184006: {Iface:virbr1 ExpiryTime:2022-10-31 19:40:22 +0000 UTC Type:0 Mac:52:54:00:b7:f5:32 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-184006 Clientid:01:52:54:00:b7:f5:32}
	I1031 18:42:08.053421  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined IP address 192.168.39.32 and MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:08.053554  499771 provision.go:138] copyHostCerts
	I1031 18:42:08.053628  499771 exec_runner.go:144] found /home/jenkins/minikube-integration/15242-478932/.minikube/ca.pem, removing ...
	I1031 18:42:08.053643  499771 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15242-478932/.minikube/ca.pem
	I1031 18:42:08.053707  499771 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15242-478932/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15242-478932/.minikube/ca.pem (1078 bytes)
	I1031 18:42:08.053784  499771 exec_runner.go:144] found /home/jenkins/minikube-integration/15242-478932/.minikube/cert.pem, removing ...
	I1031 18:42:08.053793  499771 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15242-478932/.minikube/cert.pem
	I1031 18:42:08.053820  499771 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15242-478932/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15242-478932/.minikube/cert.pem (1123 bytes)
	I1031 18:42:08.053870  499771 exec_runner.go:144] found /home/jenkins/minikube-integration/15242-478932/.minikube/key.pem, removing ...
	I1031 18:42:08.053880  499771 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15242-478932/.minikube/key.pem
	I1031 18:42:08.053904  499771 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15242-478932/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15242-478932/.minikube/key.pem (1675 bytes)
	I1031 18:42:08.053942  499771 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15242-478932/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15242-478932/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15242-478932/.minikube/certs/ca-key.pem org=jenkins.test-preload-184006 san=[192.168.39.32 192.168.39.32 localhost 127.0.0.1 minikube test-preload-184006]
	I1031 18:42:08.187456  499771 provision.go:172] copyRemoteCerts
	I1031 18:42:08.187507  499771 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 18:42:08.187531  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHHostname
	I1031 18:42:08.190193  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:08.190598  499771 main.go:134] libmachine: (test-preload-184006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:f5:32", ip: ""} in network mk-test-preload-184006: {Iface:virbr1 ExpiryTime:2022-10-31 19:40:22 +0000 UTC Type:0 Mac:52:54:00:b7:f5:32 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-184006 Clientid:01:52:54:00:b7:f5:32}
	I1031 18:42:08.190632  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined IP address 192.168.39.32 and MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:08.190811  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHPort
	I1031 18:42:08.191016  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHKeyPath
	I1031 18:42:08.191173  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHUsername
	I1031 18:42:08.191317  499771 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15242-478932/.minikube/machines/test-preload-184006/id_rsa Username:docker}
	I1031 18:42:08.280472  499771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-478932/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 18:42:08.307272  499771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-478932/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1031 18:42:08.332323  499771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-478932/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1031 18:42:08.356193  499771 provision.go:86] duration metric: configureAuth took 309.235222ms
	I1031 18:42:08.356219  499771 buildroot.go:189] setting minikube options for container-runtime
	I1031 18:42:08.356379  499771 config.go:180] Loaded profile config "test-preload-184006": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
	I1031 18:42:08.356392  499771 machine.go:91] provisioned docker machine in 547.909397ms
	I1031 18:42:08.356402  499771 start.go:300] post-start starting for "test-preload-184006" (driver="kvm2")
	I1031 18:42:08.356411  499771 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 18:42:08.356443  499771 main.go:134] libmachine: (test-preload-184006) Calling .DriverName
	I1031 18:42:08.356752  499771 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 18:42:08.356788  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHHostname
	I1031 18:42:08.359370  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:08.359744  499771 main.go:134] libmachine: (test-preload-184006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:f5:32", ip: ""} in network mk-test-preload-184006: {Iface:virbr1 ExpiryTime:2022-10-31 19:40:22 +0000 UTC Type:0 Mac:52:54:00:b7:f5:32 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-184006 Clientid:01:52:54:00:b7:f5:32}
	I1031 18:42:08.359780  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined IP address 192.168.39.32 and MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:08.359899  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHPort
	I1031 18:42:08.360090  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHKeyPath
	I1031 18:42:08.360254  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHUsername
	I1031 18:42:08.360399  499771 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15242-478932/.minikube/machines/test-preload-184006/id_rsa Username:docker}
	I1031 18:42:08.443190  499771 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 18:42:08.447575  499771 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 18:42:08.447596  499771 filesync.go:126] Scanning /home/jenkins/minikube-integration/15242-478932/.minikube/addons for local assets ...
	I1031 18:42:08.447675  499771 filesync.go:126] Scanning /home/jenkins/minikube-integration/15242-478932/.minikube/files for local assets ...
	I1031 18:42:08.447763  499771 filesync.go:149] local asset: /home/jenkins/minikube-integration/15242-478932/.minikube/files/etc/ssl/certs/4863142.pem -> 4863142.pem in /etc/ssl/certs
	I1031 18:42:08.447838  499771 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 18:42:08.455771  499771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-478932/.minikube/files/etc/ssl/certs/4863142.pem --> /etc/ssl/certs/4863142.pem (1708 bytes)
	I1031 18:42:08.480158  499771 start.go:303] post-start completed in 123.74603ms
	I1031 18:42:08.480180  499771 fix.go:57] fixHost completed within 693.991926ms
	I1031 18:42:08.480202  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHHostname
	I1031 18:42:08.482819  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:08.483184  499771 main.go:134] libmachine: (test-preload-184006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:f5:32", ip: ""} in network mk-test-preload-184006: {Iface:virbr1 ExpiryTime:2022-10-31 19:40:22 +0000 UTC Type:0 Mac:52:54:00:b7:f5:32 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-184006 Clientid:01:52:54:00:b7:f5:32}
	I1031 18:42:08.483225  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined IP address 192.168.39.32 and MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:08.483375  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHPort
	I1031 18:42:08.483546  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHKeyPath
	I1031 18:42:08.483707  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHKeyPath
	I1031 18:42:08.483830  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHUsername
	I1031 18:42:08.484053  499771 main.go:134] libmachine: Using SSH client type: native
	I1031 18:42:08.484170  499771 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 192.168.39.32 22 <nil> <nil>}
	I1031 18:42:08.484182  499771 main.go:134] libmachine: About to run SSH command:
	date +%s.%N
	I1031 18:42:08.591307  499771 main.go:134] libmachine: SSH cmd err, output: <nil>: 1667241728.586988450
	
	I1031 18:42:08.591330  499771 fix.go:207] guest clock: 1667241728.586988450
	I1031 18:42:08.591338  499771 fix.go:220] Guest: 2022-10-31 18:42:08.58698845 +0000 UTC Remote: 2022-10-31 18:42:08.480183451 +0000 UTC m=+4.998114634 (delta=106.804999ms)
	I1031 18:42:08.591354  499771 fix.go:191] guest clock delta is within tolerance: 106.804999ms
	I1031 18:42:08.591359  499771 start.go:83] releasing machines lock for "test-preload-184006", held for 805.183267ms
	I1031 18:42:08.591394  499771 main.go:134] libmachine: (test-preload-184006) Calling .DriverName
	I1031 18:42:08.591624  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetIP
	I1031 18:42:08.593989  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:08.594296  499771 main.go:134] libmachine: (test-preload-184006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:f5:32", ip: ""} in network mk-test-preload-184006: {Iface:virbr1 ExpiryTime:2022-10-31 19:40:22 +0000 UTC Type:0 Mac:52:54:00:b7:f5:32 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-184006 Clientid:01:52:54:00:b7:f5:32}
	I1031 18:42:08.594351  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined IP address 192.168.39.32 and MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:08.594457  499771 main.go:134] libmachine: (test-preload-184006) Calling .DriverName
	I1031 18:42:08.595089  499771 main.go:134] libmachine: (test-preload-184006) Calling .DriverName
	I1031 18:42:08.595291  499771 main.go:134] libmachine: (test-preload-184006) Calling .DriverName
	I1031 18:42:08.595402  499771 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1031 18:42:08.595452  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHHostname
	I1031 18:42:08.595523  499771 ssh_runner.go:195] Run: systemctl --version
	I1031 18:42:08.595552  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHHostname
	I1031 18:42:08.598244  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:08.598565  499771 main.go:134] libmachine: (test-preload-184006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:f5:32", ip: ""} in network mk-test-preload-184006: {Iface:virbr1 ExpiryTime:2022-10-31 19:40:22 +0000 UTC Type:0 Mac:52:54:00:b7:f5:32 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-184006 Clientid:01:52:54:00:b7:f5:32}
	I1031 18:42:08.598590  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined IP address 192.168.39.32 and MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:08.598609  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:08.598759  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHPort
	I1031 18:42:08.598997  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHKeyPath
	I1031 18:42:08.599099  499771 main.go:134] libmachine: (test-preload-184006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:f5:32", ip: ""} in network mk-test-preload-184006: {Iface:virbr1 ExpiryTime:2022-10-31 19:40:22 +0000 UTC Type:0 Mac:52:54:00:b7:f5:32 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-184006 Clientid:01:52:54:00:b7:f5:32}
	I1031 18:42:08.599143  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined IP address 192.168.39.32 and MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:08.599158  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHUsername
	I1031 18:42:08.599300  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHPort
	I1031 18:42:08.599297  499771 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15242-478932/.minikube/machines/test-preload-184006/id_rsa Username:docker}
	I1031 18:42:08.599484  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHKeyPath
	I1031 18:42:08.599656  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHUsername
	I1031 18:42:08.599830  499771 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15242-478932/.minikube/machines/test-preload-184006/id_rsa Username:docker}
	I1031 18:42:08.692904  499771 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I1031 18:42:08.693010  499771 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 18:42:08.721791  499771 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.6". assuming images are not preloaded.
	I1031 18:42:08.721852  499771 ssh_runner.go:195] Run: which lz4
	I1031 18:42:08.726000  499771 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1031 18:42:08.729899  499771 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1031 18:42:08.729926  499771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-478932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.6-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458739102 bytes)
	I1031 18:42:10.998450  499771 containerd.go:496] Took 2.272466 seconds to copy over tarball
	I1031 18:42:10.998543  499771 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 18:42:14.226199  499771 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.227618439s)
	I1031 18:42:14.226229  499771 containerd.go:503] Took 3.227747 seconds to extract the tarball
	I1031 18:42:14.226239  499771 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1031 18:42:14.274045  499771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 18:42:14.424324  499771 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 18:42:14.476005  499771 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1031 18:42:14.491309  499771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 18:42:14.507051  499771 docker.go:189] disabling docker service ...
	I1031 18:42:14.507119  499771 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 18:42:14.524323  499771 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 18:42:14.536745  499771 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 18:42:14.672442  499771 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 18:42:14.833621  499771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 18:42:14.846273  499771 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 18:42:14.871099  499771 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.7"|' -i /etc/containerd/config.toml"
	I1031 18:42:14.890035  499771 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I1031 18:42:14.900300  499771 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I1031 18:42:14.910546  499771 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.d"|' -i /etc/containerd/config.toml"
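The four sed one-liners above patch /etc/containerd/config.toml in place. An equivalent in-Go helper for the sandbox_image edit — illustrative only, not minikube's code; like the sed command, it replaces the whole matching line:

```go
package main

import (
	"fmt"
	"regexp"
)

// patchSandboxImage rewrites any sandbox_image line of a containerd
// config.toml, mirroring the sed one-liner in the log above.
func patchSandboxImage(toml, image string) string {
	re := regexp.MustCompile(`(?m)^.*sandbox_image = .*$`)
	return re.ReplaceAllString(toml, fmt.Sprintf("sandbox_image = %q", image))
}

func main() {
	cfg := `[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "k8s.gcr.io/pause:3.6"`
	fmt.Println(patchSandboxImage(cfg, "k8s.gcr.io/pause:3.7"))
}
```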
	I1031 18:42:14.921515  499771 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 18:42:14.931768  499771 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 18:42:14.940885  499771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 18:42:15.133789  499771 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 18:42:15.161826  499771 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I1031 18:42:15.161909  499771 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1031 18:42:15.166807  499771 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1031 18:42:16.272105  499771 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1031 18:42:16.278365  499771 retry.go:31] will retry after 2.160763633s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1031 18:42:18.440202  499771 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1031 18:42:18.446607  499771 start.go:472] Will wait 60s for crictl version
	I1031 18:42:18.446671  499771 ssh_runner.go:195] Run: sudo crictl version
	I1031 18:42:18.533139  499771 start.go:481] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.6.8
	RuntimeApiVersion:  v1alpha2
	I1031 18:42:18.533216  499771 ssh_runner.go:195] Run: containerd --version
	I1031 18:42:18.564086  499771 ssh_runner.go:195] Run: containerd --version
	I1031 18:42:18.642240  499771 out.go:177] * Preparing Kubernetes v1.24.6 on containerd 1.6.8 ...
	I1031 18:42:18.643710  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetIP
	I1031 18:42:18.646432  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:18.646807  499771 main.go:134] libmachine: (test-preload-184006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:f5:32", ip: ""} in network mk-test-preload-184006: {Iface:virbr1 ExpiryTime:2022-10-31 19:40:22 +0000 UTC Type:0 Mac:52:54:00:b7:f5:32 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-184006 Clientid:01:52:54:00:b7:f5:32}
	I1031 18:42:18.646837  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined IP address 192.168.39.32 and MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:18.647029  499771 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1031 18:42:18.651892  499771 preload.go:132] Checking if preload exists for k8s version v1.24.6 and runtime containerd
	I1031 18:42:18.651956  499771 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 18:42:18.701172  499771 containerd.go:553] all images are preloaded for containerd runtime.
	I1031 18:42:18.701198  499771 containerd.go:467] Images already preloaded, skipping extraction
	I1031 18:42:18.701260  499771 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 18:42:18.742644  499771 containerd.go:553] all images are preloaded for containerd runtime.
	I1031 18:42:18.742675  499771 cache_images.go:84] Images are preloaded, skipping loading
	I1031 18:42:18.742733  499771 ssh_runner.go:195] Run: sudo crictl info
	I1031 18:42:18.828091  499771 cni.go:95] Creating CNI manager for ""
	I1031 18:42:18.828129  499771 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
	I1031 18:42:18.828143  499771 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1031 18:42:18.828161  499771 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.32 APIServerPort:8443 KubernetesVersion:v1.24.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-184006 NodeName:test-preload-184006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.32"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.32 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1031 18:42:18.828344  499771 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.32
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "test-preload-184006"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.32
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.32"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1031 18:42:18.828453  499771 kubeadm.go:962] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-184006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.32 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.6 ClusterName:test-preload-184006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1031 18:42:18.828513  499771 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.6
	I1031 18:42:18.844474  499771 binaries.go:44] Found k8s binaries, skipping transfer
	I1031 18:42:18.844551  499771 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1031 18:42:18.858877  499771 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (513 bytes)
	I1031 18:42:18.887051  499771 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1031 18:42:18.917115  499771 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2051 bytes)
	I1031 18:42:18.950221  499771 ssh_runner.go:195] Run: grep 192.168.39.32	control-plane.minikube.internal$ /etc/hosts
	I1031 18:42:18.957835  499771 certs.go:54] Setting up /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/test-preload-184006 for IP: 192.168.39.32
	I1031 18:42:18.957938  499771 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15242-478932/.minikube/ca.key
	I1031 18:42:18.957990  499771 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15242-478932/.minikube/proxy-client-ca.key
	I1031 18:42:18.958074  499771 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/test-preload-184006/client.key
	I1031 18:42:18.958153  499771 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/test-preload-184006/apiserver.key.ce938653
	I1031 18:42:18.958206  499771 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/test-preload-184006/proxy-client.key
	I1031 18:42:18.958317  499771 certs.go:388] found cert: /home/jenkins/minikube-integration/15242-478932/.minikube/certs/home/jenkins/minikube-integration/15242-478932/.minikube/certs/486314.pem (1338 bytes)
	W1031 18:42:18.958354  499771 certs.go:384] ignoring /home/jenkins/minikube-integration/15242-478932/.minikube/certs/home/jenkins/minikube-integration/15242-478932/.minikube/certs/486314_empty.pem, impossibly tiny 0 bytes
	I1031 18:42:18.958371  499771 certs.go:388] found cert: /home/jenkins/minikube-integration/15242-478932/.minikube/certs/home/jenkins/minikube-integration/15242-478932/.minikube/certs/ca-key.pem (1675 bytes)
	I1031 18:42:18.958420  499771 certs.go:388] found cert: /home/jenkins/minikube-integration/15242-478932/.minikube/certs/home/jenkins/minikube-integration/15242-478932/.minikube/certs/ca.pem (1078 bytes)
	I1031 18:42:18.958453  499771 certs.go:388] found cert: /home/jenkins/minikube-integration/15242-478932/.minikube/certs/home/jenkins/minikube-integration/15242-478932/.minikube/certs/cert.pem (1123 bytes)
	I1031 18:42:18.958487  499771 certs.go:388] found cert: /home/jenkins/minikube-integration/15242-478932/.minikube/certs/home/jenkins/minikube-integration/15242-478932/.minikube/certs/key.pem (1675 bytes)
	I1031 18:42:18.958561  499771 certs.go:388] found cert: /home/jenkins/minikube-integration/15242-478932/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15242-478932/.minikube/files/etc/ssl/certs/4863142.pem (1708 bytes)
	I1031 18:42:18.959113  499771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/test-preload-184006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1031 18:42:19.008266  499771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/test-preload-184006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1031 18:42:19.053561  499771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/test-preload-184006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1031 18:42:19.092090  499771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/test-preload-184006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1031 18:42:19.124161  499771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-478932/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1031 18:42:19.159834  499771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-478932/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1031 18:42:19.211307  499771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-478932/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1031 18:42:19.246015  499771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-478932/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1031 18:42:19.272090  499771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-478932/.minikube/certs/486314.pem --> /usr/share/ca-certificates/486314.pem (1338 bytes)
	I1031 18:42:19.295258  499771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-478932/.minikube/files/etc/ssl/certs/4863142.pem --> /usr/share/ca-certificates/4863142.pem (1708 bytes)
	I1031 18:42:19.317858  499771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-478932/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1031 18:42:19.370372  499771 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1031 18:42:19.397650  499771 ssh_runner.go:195] Run: openssl version
	I1031 18:42:19.404314  499771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1031 18:42:19.416678  499771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1031 18:42:19.422646  499771 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Oct 31 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I1031 18:42:19.422716  499771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1031 18:42:19.429327  499771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1031 18:42:19.451933  499771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/486314.pem && ln -fs /usr/share/ca-certificates/486314.pem /etc/ssl/certs/486314.pem"
	I1031 18:42:19.466353  499771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/486314.pem
	I1031 18:42:19.471727  499771 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Oct 31 18:07 /usr/share/ca-certificates/486314.pem
	I1031 18:42:19.471781  499771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/486314.pem
	I1031 18:42:19.489248  499771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/486314.pem /etc/ssl/certs/51391683.0"
	I1031 18:42:19.510138  499771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4863142.pem && ln -fs /usr/share/ca-certificates/4863142.pem /etc/ssl/certs/4863142.pem"
	I1031 18:42:19.531349  499771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4863142.pem
	I1031 18:42:19.537122  499771 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Oct 31 18:07 /usr/share/ca-certificates/4863142.pem
	I1031 18:42:19.537163  499771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4863142.pem
	I1031 18:42:19.553344  499771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4863142.pem /etc/ssl/certs/3ec20f2e.0"
	I1031 18:42:19.575662  499771 kubeadm.go:396] StartCluster: {Name:test-preload-184006 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15159/minikube-v1.27.0-1666206003-15159-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVers
ion:v1.24.6 ClusterName:test-preload-184006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 18:42:19.575787  499771 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1031 18:42:19.575847  499771 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 18:42:19.668546  499771 cri.go:87] found id: "e9858f06237c0e4abd2f9cccebe32689cbb5f07fd94fdab516df3b52652d1628"
	I1031 18:42:19.668584  499771 cri.go:87] found id: "9bcba87ab2642bce25a8e6ff58dbd95a33822d0041d733e9682521ae2d8532a2"
	I1031 18:42:19.668594  499771 cri.go:87] found id: ""
	I1031 18:42:19.668666  499771 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1031 18:42:19.734738  499771 cri.go:114] JSON = [{"ociVersion":"1.0.2-dev","id":"02072def910a6372661bf8f4bc450fa1c44c920af401ba651af037214185e2e3","pid":3057,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02072def910a6372661bf8f4bc450fa1c44c920af401ba651af037214185e2e3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02072def910a6372661bf8f4bc450fa1c44c920af401ba651af037214185e2e3/rootfs","created":"2022-10-31T18:42:19.493708866Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"02072def910a6372661bf8f4bc450fa1c44c920af401ba651af037214185e2e3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-test-preload-184006_78672c8301e82ad4cf36dde5d4357612","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-
manager-test-preload-184006","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"9bcba87ab2642bce25a8e6ff58dbd95a33822d0041d733e9682521ae2d8532a2","pid":2970,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bcba87ab2642bce25a8e6ff58dbd95a33822d0041d733e9682521ae2d8532a2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bcba87ab2642bce25a8e6ff58dbd95a33822d0041d733e9682521ae2d8532a2/rootfs","created":"2022-10-31T18:42:19.197463119Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/coredns/coredns:v1.8.6","io.kubernetes.cri.sandbox-id":"bce9ad3662cf7e43e9c0fb30dc3860cbf13b5865b72ec9d3171326e41dc42dfa","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-s4vgp","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bce9ad3662cf7e43e9c0fb30dc3860cbf13b5865b72ec9d3171326e41
dc42dfa","pid":2859,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bce9ad3662cf7e43e9c0fb30dc3860cbf13b5865b72ec9d3171326e41dc42dfa","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bce9ad3662cf7e43e9c0fb30dc3860cbf13b5865b72ec9d3171326e41dc42dfa/rootfs","created":"2022-10-31T18:42:18.519131915Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"bce9ad3662cf7e43e9c0fb30dc3860cbf13b5865b72ec9d3171326e41dc42dfa","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-6d4b75cb6d-s4vgp_21edc182-008f-46db-b8a8-459fba2a89af","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-6d4b75cb6d-s4vgp","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bf1282f55024603e0fde8852645d9f4b681bb9318557
e063f2c7185b94527097","pid":2865,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf1282f55024603e0fde8852645d9f4b681bb9318557e063f2c7185b94527097","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf1282f55024603e0fde8852645d9f4b681bb9318557e063f2c7185b94527097/rootfs","created":"2022-10-31T18:42:18.672092046Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"bf1282f55024603e0fde8852645d9f4b681bb9318557e063f2c7185b94527097","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-j9mw4_13e663e3-979f-4420-88bb-b73be1318571","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-j9mw4","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c622cad5fac64932a2b12f1edeb29c67f1df1d3c633bdac4a95181cee
00a5e62","pid":2857,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c622cad5fac64932a2b12f1edeb29c67f1df1d3c633bdac4a95181cee00a5e62","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c622cad5fac64932a2b12f1edeb29c67f1df1d3c633bdac4a95181cee00a5e62/rootfs","created":"2022-10-31T18:42:18.530443281Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"c622cad5fac64932a2b12f1edeb29c67f1df1d3c633bdac4a95181cee00a5e62","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-test-preload-184006_5fb4ef8f863803479dcdcca88c3a7481","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-test-preload-184006","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e9858f06237c0e4abd2f9cccebe32689cbb5f07fd94fdab516df3b52
652d1628","pid":3071,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e9858f06237c0e4abd2f9cccebe32689cbb5f07fd94fdab516df3b52652d1628","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e9858f06237c0e4abd2f9cccebe32689cbb5f07fd94fdab516df3b52652d1628/rootfs","created":"2022-10-31T18:42:19.553827868Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"k8s.gcr.io/etcd:3.5.3-0","io.kubernetes.cri.sandbox-id":"c622cad5fac64932a2b12f1edeb29c67f1df1d3c633bdac4a95181cee00a5e62","io.kubernetes.cri.sandbox-name":"etcd-test-preload-184006","io.kubernetes.cri.sandbox-namespace":"kube-system"},"owner":"root"}]
	I1031 18:42:19.734973  499771 cri.go:124] list returned 6 containers
	I1031 18:42:19.734994  499771 cri.go:127] container: {ID:02072def910a6372661bf8f4bc450fa1c44c920af401ba651af037214185e2e3 Status:created}
	I1031 18:42:19.735014  499771 cri.go:129] skipping 02072def910a6372661bf8f4bc450fa1c44c920af401ba651af037214185e2e3 - not in ps
	I1031 18:42:19.735031  499771 cri.go:127] container: {ID:9bcba87ab2642bce25a8e6ff58dbd95a33822d0041d733e9682521ae2d8532a2 Status:running}
	I1031 18:42:19.735045  499771 cri.go:133] skipping {9bcba87ab2642bce25a8e6ff58dbd95a33822d0041d733e9682521ae2d8532a2 running}: state = "running", want "paused"
	I1031 18:42:19.735061  499771 cri.go:127] container: {ID:bce9ad3662cf7e43e9c0fb30dc3860cbf13b5865b72ec9d3171326e41dc42dfa Status:running}
	I1031 18:42:19.735069  499771 cri.go:129] skipping bce9ad3662cf7e43e9c0fb30dc3860cbf13b5865b72ec9d3171326e41dc42dfa - not in ps
	I1031 18:42:19.735081  499771 cri.go:127] container: {ID:bf1282f55024603e0fde8852645d9f4b681bb9318557e063f2c7185b94527097 Status:running}
	I1031 18:42:19.735094  499771 cri.go:129] skipping bf1282f55024603e0fde8852645d9f4b681bb9318557e063f2c7185b94527097 - not in ps
	I1031 18:42:19.735106  499771 cri.go:127] container: {ID:c622cad5fac64932a2b12f1edeb29c67f1df1d3c633bdac4a95181cee00a5e62 Status:running}
	I1031 18:42:19.735118  499771 cri.go:129] skipping c622cad5fac64932a2b12f1edeb29c67f1df1d3c633bdac4a95181cee00a5e62 - not in ps
	I1031 18:42:19.735129  499771 cri.go:127] container: {ID:e9858f06237c0e4abd2f9cccebe32689cbb5f07fd94fdab516df3b52652d1628 Status:created}
	I1031 18:42:19.735146  499771 cri.go:133] skipping {e9858f06237c0e4abd2f9cccebe32689cbb5f07fd94fdab516df3b52652d1628 created}: state = "created", want "paused"
	I1031 18:42:19.735203  499771 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1031 18:42:19.772328  499771 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1031 18:42:19.772357  499771 kubeadm.go:627] restartCluster start
	I1031 18:42:19.772413  499771 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1031 18:42:19.800047  499771 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1031 18:42:19.800741  499771 kubeconfig.go:92] found "test-preload-184006" server: "https://192.168.39.32:8443"
	I1031 18:42:19.801675  499771 kapi.go:59] client config for test-preload-184006: &rest.Config{Host:"https://192.168.39.32:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15242-478932/.minikube/profiles/test-preload-184006/client.crt", KeyFile:"/home/jenkins/minikube-integration/15242-478932/.minikube/profiles/test-preload-184006/client.key", CAFile:"/home/jenkins/minikube-integration/15242-478932/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint
8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1782ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 18:42:19.802299  499771 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1031 18:42:19.830051  499771 kubeadm.go:594] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -38,7 +38,7 @@
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	       proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.24.4
	+kubernetesVersion: v1.24.6
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I1031 18:42:19.830073  499771 kubeadm.go:1114] stopping kube-system containers ...
	I1031 18:42:19.830091  499771 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1031 18:42:19.830159  499771 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1031 18:42:19.910620  499771 cri.go:87] found id: "e9858f06237c0e4abd2f9cccebe32689cbb5f07fd94fdab516df3b52652d1628"
	I1031 18:42:19.910652  499771 cri.go:87] found id: "9bcba87ab2642bce25a8e6ff58dbd95a33822d0041d733e9682521ae2d8532a2"
	I1031 18:42:19.910659  499771 cri.go:87] found id: ""
	I1031 18:42:19.910665  499771 cri.go:232] Stopping containers: [e9858f06237c0e4abd2f9cccebe32689cbb5f07fd94fdab516df3b52652d1628 9bcba87ab2642bce25a8e6ff58dbd95a33822d0041d733e9682521ae2d8532a2]
	I1031 18:42:19.910720  499771 ssh_runner.go:195] Run: which crictl
	I1031 18:42:19.929248  499771 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop e9858f06237c0e4abd2f9cccebe32689cbb5f07fd94fdab516df3b52652d1628 9bcba87ab2642bce25a8e6ff58dbd95a33822d0041d733e9682521ae2d8532a2
	I1031 18:42:20.217713  499771 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1031 18:42:20.265869  499771 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1031 18:42:20.276510  499771 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Oct 31 18:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Oct 31 18:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2015 Oct 31 18:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Oct 31 18:40 /etc/kubernetes/scheduler.conf
	
	I1031 18:42:20.276564  499771 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1031 18:42:20.286384  499771 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1031 18:42:20.295574  499771 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1031 18:42:20.304525  499771 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1031 18:42:20.304598  499771 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1031 18:42:20.314049  499771 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1031 18:42:20.322920  499771 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1031 18:42:20.322978  499771 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1031 18:42:20.332292  499771 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1031 18:42:20.342305  499771 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1031 18:42:20.342332  499771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 18:42:20.437219  499771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 18:42:21.504922  499771 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.067647861s)
	I1031 18:42:21.504959  499771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1031 18:42:21.840808  499771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 18:42:21.910142  499771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1031 18:42:21.984121  499771 api_server.go:51] waiting for apiserver process to appear ...
	I1031 18:42:21.984188  499771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 18:42:22.498463  499771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 18:42:22.998024  499771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 18:42:23.498784  499771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 18:42:23.998903  499771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 18:42:24.020122  499771 api_server.go:71] duration metric: took 2.035984712s to wait for apiserver process to appear ...
	I1031 18:42:24.020155  499771 api_server.go:87] waiting for apiserver healthz status ...
	I1031 18:42:24.020177  499771 api_server.go:252] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I1031 18:42:27.560437  499771 api_server.go:278] https://192.168.39.32:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1031 18:42:27.560467  499771 api_server.go:102] status: https://192.168.39.32:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1031 18:42:28.061474  499771 api_server.go:252] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I1031 18:42:28.066712  499771 api_server.go:278] https://192.168.39.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1031 18:42:28.066753  499771 api_server.go:102] status: https://192.168.39.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1031 18:42:28.561499  499771 api_server.go:252] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I1031 18:42:28.574208  499771 api_server.go:278] https://192.168.39.32:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1031 18:42:28.574241  499771 api_server.go:102] status: https://192.168.39.32:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1031 18:42:29.060662  499771 api_server.go:252] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I1031 18:42:29.067149  499771 api_server.go:278] https://192.168.39.32:8443/healthz returned 200:
	ok
	I1031 18:42:29.076508  499771 api_server.go:140] control plane version: v1.24.6
	I1031 18:42:29.076538  499771 api_server.go:130] duration metric: took 5.056375572s to wait for apiserver health ...
	I1031 18:42:29.076547  499771 cni.go:95] Creating CNI manager for ""
	I1031 18:42:29.076557  499771 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
	I1031 18:42:29.078941  499771 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1031 18:42:29.080528  499771 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1031 18:42:29.090943  499771 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1031 18:42:29.109087  499771 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 18:42:29.120290  499771 system_pods.go:59] 7 kube-system pods found
	I1031 18:42:29.120317  499771 system_pods.go:61] "coredns-6d4b75cb6d-s4vgp" [21edc182-008f-46db-b8a8-459fba2a89af] Running
	I1031 18:42:29.120322  499771 system_pods.go:61] "etcd-test-preload-184006" [a9e52a87-e795-4f17-98e1-3bdaa8f915af] Running
	I1031 18:42:29.120328  499771 system_pods.go:61] "kube-apiserver-test-preload-184006" [5392d432-1d02-4bfe-ae4a-49fa90749667] Pending
	I1031 18:42:29.120333  499771 system_pods.go:61] "kube-controller-manager-test-preload-184006" [831688a0-c9ce-4b19-844f-8a75367559c1] Pending
	I1031 18:42:29.120336  499771 system_pods.go:61] "kube-proxy-j9mw4" [13e663e3-979f-4420-88bb-b73be1318571] Running
	I1031 18:42:29.120340  499771 system_pods.go:61] "kube-scheduler-test-preload-184006" [fe69dd65-61aa-408d-bed6-2017c705df4b] Pending
	I1031 18:42:29.120346  499771 system_pods.go:61] "storage-provisioner" [65b625ba-eecc-4594-8502-3158c39c55da] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1031 18:42:29.120352  499771 system_pods.go:74] duration metric: took 11.241941ms to wait for pod list to return data ...
	I1031 18:42:29.120358  499771 node_conditions.go:102] verifying NodePressure condition ...
	I1031 18:42:29.123678  499771 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 18:42:29.123703  499771 node_conditions.go:123] node cpu capacity is 2
	I1031 18:42:29.123713  499771 node_conditions.go:105] duration metric: took 3.350347ms to run NodePressure ...
	I1031 18:42:29.123728  499771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1031 18:42:29.364115  499771 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I1031 18:42:29.368512  499771 kubeadm.go:778] kubelet initialised
	I1031 18:42:29.368542  499771 kubeadm.go:779] duration metric: took 4.389929ms waiting for restarted kubelet to initialise ...
	I1031 18:42:29.368551  499771 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 18:42:29.372814  499771 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-s4vgp" in "kube-system" namespace to be "Ready" ...
	I1031 18:42:29.387184  499771 pod_ready.go:92] pod "coredns-6d4b75cb6d-s4vgp" in "kube-system" namespace has status "Ready":"True"
	I1031 18:42:29.387203  499771 pod_ready.go:81] duration metric: took 14.367214ms waiting for pod "coredns-6d4b75cb6d-s4vgp" in "kube-system" namespace to be "Ready" ...
	I1031 18:42:29.387211  499771 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-184006" in "kube-system" namespace to be "Ready" ...
	I1031 18:42:29.392918  499771 pod_ready.go:92] pod "etcd-test-preload-184006" in "kube-system" namespace has status "Ready":"True"
	I1031 18:42:29.392941  499771 pod_ready.go:81] duration metric: took 5.722627ms waiting for pod "etcd-test-preload-184006" in "kube-system" namespace to be "Ready" ...
	I1031 18:42:29.392951  499771 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-184006" in "kube-system" namespace to be "Ready" ...
	I1031 18:42:31.406258  499771 pod_ready.go:102] pod "kube-apiserver-test-preload-184006" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I1031 18:42:33.905273  499771 pod_ready.go:102] pod "kube-apiserver-test-preload-184006" in "kube-system" namespace has status "Ready":"False"
	I1031 18:42:35.954553  499771 pod_ready.go:102] pod "kube-apiserver-test-preload-184006" in "kube-system" namespace has status "Ready":"False"
	I1031 18:42:38.405269  499771 pod_ready.go:102] pod "kube-apiserver-test-preload-184006" in "kube-system" namespace has status "Ready":"False"
	I1031 18:42:40.906015  499771 pod_ready.go:102] pod "kube-apiserver-test-preload-184006" in "kube-system" namespace has status "Ready":"False"
	I1031 18:42:42.404067  499771 pod_ready.go:92] pod "kube-apiserver-test-preload-184006" in "kube-system" namespace has status "Ready":"True"
	I1031 18:42:42.404101  499771 pod_ready.go:81] duration metric: took 13.011142355s waiting for pod "kube-apiserver-test-preload-184006" in "kube-system" namespace to be "Ready" ...
	I1031 18:42:42.404111  499771 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-184006" in "kube-system" namespace to be "Ready" ...
	I1031 18:42:42.409763  499771 pod_ready.go:92] pod "kube-controller-manager-test-preload-184006" in "kube-system" namespace has status "Ready":"True"
	I1031 18:42:42.409786  499771 pod_ready.go:81] duration metric: took 5.668257ms waiting for pod "kube-controller-manager-test-preload-184006" in "kube-system" namespace to be "Ready" ...
	I1031 18:42:42.409797  499771 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-j9mw4" in "kube-system" namespace to be "Ready" ...
	I1031 18:42:42.411766  499771 pod_ready.go:97] error getting pod "kube-proxy-j9mw4" in "kube-system" namespace (skipping!): pods "kube-proxy-j9mw4" not found
	I1031 18:42:42.411786  499771 pod_ready.go:81] duration metric: took 1.982541ms waiting for pod "kube-proxy-j9mw4" in "kube-system" namespace to be "Ready" ...
	E1031 18:42:42.411796  499771 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-j9mw4" in "kube-system" namespace (skipping!): pods "kube-proxy-j9mw4" not found
	I1031 18:42:42.411803  499771 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-184006" in "kube-system" namespace to be "Ready" ...
	I1031 18:42:42.416212  499771 pod_ready.go:92] pod "kube-scheduler-test-preload-184006" in "kube-system" namespace has status "Ready":"True"
	I1031 18:42:42.416236  499771 pod_ready.go:81] duration metric: took 4.420339ms waiting for pod "kube-scheduler-test-preload-184006" in "kube-system" namespace to be "Ready" ...
	I1031 18:42:42.416247  499771 pod_ready.go:38] duration metric: took 13.047680635s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 18:42:42.416267  499771 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1031 18:42:42.429044  499771 ops.go:34] apiserver oom_adj: -16
	I1031 18:42:42.429067  499771 kubeadm.go:631] restartCluster took 22.65670159s
	I1031 18:42:42.429076  499771 kubeadm.go:398] StartCluster complete in 22.853422668s
	I1031 18:42:42.429095  499771 settings.go:142] acquiring lock: {Name:mk5c8e1d2122318318afc2d7ee4aacad9fdcdc6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 18:42:42.429252  499771 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15242-478932/kubeconfig
	I1031 18:42:42.429917  499771 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15242-478932/kubeconfig: {Name:mkcadd42776d07815e680ad373b1e2c1348a5620 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 18:42:42.430642  499771 kapi.go:59] client config for test-preload-184006: &rest.Config{Host:"https://192.168.39.32:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15242-478932/.minikube/profiles/test-preload-184006/client.crt", KeyFile:"/home/jenkins/minikube-integration/15242-478932/.minikube/profiles/test-preload-184006/client.key", CAFile:"/home/jenkins/minikube-integration/15242-478932/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1782ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 18:42:42.433277  499771 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "test-preload-184006" rescaled to 1
	I1031 18:42:42.433367  499771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1031 18:42:42.433388  499771 addons.go:412] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I1031 18:42:42.433358  499771 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.39.32 Port:8443 KubernetesVersion:v1.24.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1031 18:42:42.433443  499771 addons.go:65] Setting storage-provisioner=true in profile "test-preload-184006"
	I1031 18:42:42.433464  499771 addons.go:153] Setting addon storage-provisioner=true in "test-preload-184006"
	W1031 18:42:42.433478  499771 addons.go:162] addon storage-provisioner should already be in state true
	I1031 18:42:42.433545  499771 host.go:66] Checking if "test-preload-184006" exists ...
	I1031 18:42:42.433464  499771 addons.go:65] Setting default-storageclass=true in profile "test-preload-184006"
	I1031 18:42:42.436576  499771 out.go:177] * Verifying Kubernetes components...
	I1031 18:42:42.433594  499771 config.go:180] Loaded profile config "test-preload-184006": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.6
	I1031 18:42:42.433604  499771 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-184006"
	I1031 18:42:42.433914  499771 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1031 18:42:42.438409  499771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 18:42:42.438429  499771 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:42:42.438856  499771 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1031 18:42:42.438904  499771 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:42:42.453714  499771 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:34635
	I1031 18:42:42.453744  499771 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:33331
	I1031 18:42:42.454174  499771 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:42:42.454228  499771 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:42:42.454716  499771 main.go:134] libmachine: Using API Version  1
	I1031 18:42:42.454741  499771 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:42:42.454853  499771 main.go:134] libmachine: Using API Version  1
	I1031 18:42:42.454884  499771 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:42:42.455090  499771 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:42:42.455247  499771 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:42:42.455433  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetState
	I1031 18:42:42.455695  499771 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1031 18:42:42.455748  499771 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:42:42.458142  499771 kapi.go:59] client config for test-preload-184006: &rest.Config{Host:"https://192.168.39.32:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15242-478932/.minikube/profiles/test-preload-184006/client.crt", KeyFile:"/home/jenkins/minikube-integration/15242-478932/.minikube/profiles/test-preload-184006/client.key", CAFile:"/home/jenkins/minikube-integration/15242-478932/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1782ac0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1031 18:42:42.470011  499771 addons.go:153] Setting addon default-storageclass=true in "test-preload-184006"
	W1031 18:42:42.470032  499771 addons.go:162] addon default-storageclass should already be in state true
	I1031 18:42:42.470057  499771 host.go:66] Checking if "test-preload-184006" exists ...
	I1031 18:42:42.470438  499771 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1031 18:42:42.470484  499771 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:42:42.472025  499771 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:36379
	I1031 18:42:42.472441  499771 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:42:42.472979  499771 main.go:134] libmachine: Using API Version  1
	I1031 18:42:42.473011  499771 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:42:42.473374  499771 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:42:42.473557  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetState
	I1031 18:42:42.475177  499771 main.go:134] libmachine: (test-preload-184006) Calling .DriverName
	I1031 18:42:42.477972  499771 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1031 18:42:42.479797  499771 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 18:42:42.479819  499771 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1031 18:42:42.479836  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHHostname
	I1031 18:42:42.482970  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:42.483406  499771 main.go:134] libmachine: (test-preload-184006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:f5:32", ip: ""} in network mk-test-preload-184006: {Iface:virbr1 ExpiryTime:2022-10-31 19:40:22 +0000 UTC Type:0 Mac:52:54:00:b7:f5:32 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-184006 Clientid:01:52:54:00:b7:f5:32}
	I1031 18:42:42.483454  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined IP address 192.168.39.32 and MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:42.483631  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHPort
	I1031 18:42:42.483835  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHKeyPath
	I1031 18:42:42.484019  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHUsername
	I1031 18:42:42.484165  499771 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15242-478932/.minikube/machines/test-preload-184006/id_rsa Username:docker}
	I1031 18:42:42.487419  499771 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:32917
	I1031 18:42:42.487806  499771 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:42:42.488284  499771 main.go:134] libmachine: Using API Version  1
	I1031 18:42:42.488310  499771 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:42:42.488638  499771 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:42:42.489097  499771 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1031 18:42:42.489136  499771 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:42:42.505624  499771 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:33595
	I1031 18:42:42.506066  499771 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:42:42.506556  499771 main.go:134] libmachine: Using API Version  1
	I1031 18:42:42.506594  499771 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:42:42.506909  499771 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:42:42.507110  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetState
	I1031 18:42:42.508677  499771 main.go:134] libmachine: (test-preload-184006) Calling .DriverName
	I1031 18:42:42.508916  499771 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml
	I1031 18:42:42.508937  499771 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1031 18:42:42.508962  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHHostname
	I1031 18:42:42.511812  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:42.512283  499771 main.go:134] libmachine: (test-preload-184006) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:f5:32", ip: ""} in network mk-test-preload-184006: {Iface:virbr1 ExpiryTime:2022-10-31 19:40:22 +0000 UTC Type:0 Mac:52:54:00:b7:f5:32 Iaid: IPaddr:192.168.39.32 Prefix:24 Hostname:test-preload-184006 Clientid:01:52:54:00:b7:f5:32}
	I1031 18:42:42.512327  499771 main.go:134] libmachine: (test-preload-184006) DBG | domain test-preload-184006 has defined IP address 192.168.39.32 and MAC address 52:54:00:b7:f5:32 in network mk-test-preload-184006
	I1031 18:42:42.512494  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHPort
	I1031 18:42:42.512698  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHKeyPath
	I1031 18:42:42.512856  499771 main.go:134] libmachine: (test-preload-184006) Calling .GetSSHUsername
	I1031 18:42:42.513089  499771 sshutil.go:53] new ssh client: &{IP:192.168.39.32 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15242-478932/.minikube/machines/test-preload-184006/id_rsa Username:docker}
	I1031 18:42:42.599184  499771 node_ready.go:35] waiting up to 6m0s for node "test-preload-184006" to be "Ready" ...
	I1031 18:42:42.599274  499771 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1031 18:42:42.602082  499771 node_ready.go:49] node "test-preload-184006" has status "Ready":"True"
	I1031 18:42:42.602104  499771 node_ready.go:38] duration metric: took 2.885921ms waiting for node "test-preload-184006" to be "Ready" ...
	I1031 18:42:42.602112  499771 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 18:42:42.609312  499771 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-s4vgp" in "kube-system" namespace to be "Ready" ...
	I1031 18:42:42.614137  499771 pod_ready.go:92] pod "coredns-6d4b75cb6d-s4vgp" in "kube-system" namespace has status "Ready":"True"
	I1031 18:42:42.614155  499771 pod_ready.go:81] duration metric: took 4.816465ms waiting for pod "coredns-6d4b75cb6d-s4vgp" in "kube-system" namespace to be "Ready" ...
	I1031 18:42:42.614171  499771 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-184006" in "kube-system" namespace to be "Ready" ...
	I1031 18:42:42.618480  499771 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1031 18:42:42.629556  499771 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1031 18:42:43.002363  499771 pod_ready.go:92] pod "etcd-test-preload-184006" in "kube-system" namespace has status "Ready":"True"
	I1031 18:42:43.002386  499771 pod_ready.go:81] duration metric: took 388.207194ms waiting for pod "etcd-test-preload-184006" in "kube-system" namespace to be "Ready" ...
	I1031 18:42:43.002395  499771 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-184006" in "kube-system" namespace to be "Ready" ...
	I1031 18:42:43.419255  499771 pod_ready.go:92] pod "kube-apiserver-test-preload-184006" in "kube-system" namespace has status "Ready":"True"
	I1031 18:42:43.419289  499771 pod_ready.go:81] duration metric: took 416.886553ms waiting for pod "kube-apiserver-test-preload-184006" in "kube-system" namespace to be "Ready" ...
	I1031 18:42:43.419307  499771 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-184006" in "kube-system" namespace to be "Ready" ...
	I1031 18:42:43.652485  499771 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.033970577s)
	I1031 18:42:43.652546  499771 main.go:134] libmachine: Making call to close driver server
	I1031 18:42:43.652561  499771 main.go:134] libmachine: (test-preload-184006) Calling .Close
	I1031 18:42:43.652832  499771 main.go:134] libmachine: Successfully made call to close driver server
	I1031 18:42:43.652857  499771 main.go:134] libmachine: Making call to close connection to plugin binary
	I1031 18:42:43.652882  499771 main.go:134] libmachine: Making call to close driver server
	I1031 18:42:43.652896  499771 main.go:134] libmachine: (test-preload-184006) Calling .Close
	I1031 18:42:43.653091  499771 main.go:134] libmachine: Successfully made call to close driver server
	I1031 18:42:43.653163  499771 main.go:134] libmachine: Making call to close connection to plugin binary
	I1031 18:42:43.653129  499771 main.go:134] libmachine: (test-preload-184006) DBG | Closing plugin on server side
	I1031 18:42:43.662804  499771 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.033220658s)
	I1031 18:42:43.662846  499771 main.go:134] libmachine: Making call to close driver server
	I1031 18:42:43.662858  499771 main.go:134] libmachine: (test-preload-184006) Calling .Close
	I1031 18:42:43.663126  499771 main.go:134] libmachine: (test-preload-184006) DBG | Closing plugin on server side
	I1031 18:42:43.663153  499771 main.go:134] libmachine: Successfully made call to close driver server
	I1031 18:42:43.663163  499771 main.go:134] libmachine: Making call to close connection to plugin binary
	I1031 18:42:43.663200  499771 main.go:134] libmachine: Making call to close driver server
	I1031 18:42:43.663215  499771 main.go:134] libmachine: (test-preload-184006) Calling .Close
	I1031 18:42:43.663406  499771 main.go:134] libmachine: Successfully made call to close driver server
	I1031 18:42:43.663417  499771 main.go:134] libmachine: Making call to close connection to plugin binary
	I1031 18:42:43.663434  499771 main.go:134] libmachine: Making call to close driver server
	I1031 18:42:43.663448  499771 main.go:134] libmachine: (test-preload-184006) Calling .Close
	I1031 18:42:43.663652  499771 main.go:134] libmachine: Successfully made call to close driver server
	I1031 18:42:43.663682  499771 main.go:134] libmachine: Making call to close connection to plugin binary
	I1031 18:42:43.663693  499771 main.go:134] libmachine: (test-preload-184006) DBG | Closing plugin on server side
	I1031 18:42:43.665855  499771 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1031 18:42:43.667256  499771 addons.go:414] enableAddons completed in 1.233876685s
	I1031 18:42:43.802289  499771 pod_ready.go:92] pod "kube-controller-manager-test-preload-184006" in "kube-system" namespace has status "Ready":"True"
	I1031 18:42:43.802315  499771 pod_ready.go:81] duration metric: took 382.999687ms waiting for pod "kube-controller-manager-test-preload-184006" in "kube-system" namespace to be "Ready" ...
	I1031 18:42:43.802324  499771 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ccss7" in "kube-system" namespace to be "Ready" ...
	I1031 18:42:44.203031  499771 pod_ready.go:92] pod "kube-proxy-ccss7" in "kube-system" namespace has status "Ready":"True"
	I1031 18:42:44.203054  499771 pod_ready.go:81] duration metric: took 400.724938ms waiting for pod "kube-proxy-ccss7" in "kube-system" namespace to be "Ready" ...
	I1031 18:42:44.203063  499771 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-184006" in "kube-system" namespace to be "Ready" ...
	I1031 18:42:44.602827  499771 pod_ready.go:92] pod "kube-scheduler-test-preload-184006" in "kube-system" namespace has status "Ready":"True"
	I1031 18:42:44.602853  499771 pod_ready.go:81] duration metric: took 399.783627ms waiting for pod "kube-scheduler-test-preload-184006" in "kube-system" namespace to be "Ready" ...
	I1031 18:42:44.602862  499771 pod_ready.go:38] duration metric: took 2.000742864s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1031 18:42:44.602879  499771 api_server.go:51] waiting for apiserver process to appear ...
	I1031 18:42:44.602924  499771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 18:42:44.620737  499771 api_server.go:71] duration metric: took 2.187276433s to wait for apiserver process to appear ...
	I1031 18:42:44.620769  499771 api_server.go:87] waiting for apiserver healthz status ...
	I1031 18:42:44.620780  499771 api_server.go:252] Checking apiserver healthz at https://192.168.39.32:8443/healthz ...
	I1031 18:42:44.626247  499771 api_server.go:278] https://192.168.39.32:8443/healthz returned 200:
	ok
	I1031 18:42:44.627262  499771 api_server.go:140] control plane version: v1.24.6
	I1031 18:42:44.627286  499771 api_server.go:130] duration metric: took 6.509034ms to wait for apiserver health ...
	I1031 18:42:44.627296  499771 system_pods.go:43] waiting for kube-system pods to appear ...
	I1031 18:42:44.808376  499771 system_pods.go:59] 7 kube-system pods found
	I1031 18:42:44.808413  499771 system_pods.go:61] "coredns-6d4b75cb6d-s4vgp" [21edc182-008f-46db-b8a8-459fba2a89af] Running
	I1031 18:42:44.808422  499771 system_pods.go:61] "etcd-test-preload-184006" [a9e52a87-e795-4f17-98e1-3bdaa8f915af] Running
	I1031 18:42:44.808429  499771 system_pods.go:61] "kube-apiserver-test-preload-184006" [5392d432-1d02-4bfe-ae4a-49fa90749667] Running
	I1031 18:42:44.808436  499771 system_pods.go:61] "kube-controller-manager-test-preload-184006" [831688a0-c9ce-4b19-844f-8a75367559c1] Running
	I1031 18:42:44.808442  499771 system_pods.go:61] "kube-proxy-ccss7" [066efc5a-560b-427f-bb8e-7775c6105205] Running
	I1031 18:42:44.808449  499771 system_pods.go:61] "kube-scheduler-test-preload-184006" [fe69dd65-61aa-408d-bed6-2017c705df4b] Running
	I1031 18:42:44.808455  499771 system_pods.go:61] "storage-provisioner" [65b625ba-eecc-4594-8502-3158c39c55da] Running
	I1031 18:42:44.808466  499771 system_pods.go:74] duration metric: took 181.163361ms to wait for pod list to return data ...
	I1031 18:42:44.808475  499771 default_sa.go:34] waiting for default service account to be created ...
	I1031 18:42:45.002444  499771 default_sa.go:45] found service account: "default"
	I1031 18:42:45.002486  499771 default_sa.go:55] duration metric: took 193.998947ms for default service account to be created ...
	I1031 18:42:45.002496  499771 system_pods.go:116] waiting for k8s-apps to be running ...
	I1031 18:42:45.205408  499771 system_pods.go:86] 7 kube-system pods found
	I1031 18:42:45.205443  499771 system_pods.go:89] "coredns-6d4b75cb6d-s4vgp" [21edc182-008f-46db-b8a8-459fba2a89af] Running
	I1031 18:42:45.205452  499771 system_pods.go:89] "etcd-test-preload-184006" [a9e52a87-e795-4f17-98e1-3bdaa8f915af] Running
	I1031 18:42:45.205460  499771 system_pods.go:89] "kube-apiserver-test-preload-184006" [5392d432-1d02-4bfe-ae4a-49fa90749667] Running
	I1031 18:42:45.205471  499771 system_pods.go:89] "kube-controller-manager-test-preload-184006" [831688a0-c9ce-4b19-844f-8a75367559c1] Running
	I1031 18:42:45.205477  499771 system_pods.go:89] "kube-proxy-ccss7" [066efc5a-560b-427f-bb8e-7775c6105205] Running
	I1031 18:42:45.205484  499771 system_pods.go:89] "kube-scheduler-test-preload-184006" [fe69dd65-61aa-408d-bed6-2017c705df4b] Running
	I1031 18:42:45.205491  499771 system_pods.go:89] "storage-provisioner" [65b625ba-eecc-4594-8502-3158c39c55da] Running
	I1031 18:42:45.205498  499771 system_pods.go:126] duration metric: took 202.997115ms to wait for k8s-apps to be running ...
	I1031 18:42:45.205509  499771 system_svc.go:44] waiting for kubelet service to be running ....
	I1031 18:42:45.205563  499771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 18:42:45.218816  499771 system_svc.go:56] duration metric: took 13.300831ms WaitForService to wait for kubelet.
	I1031 18:42:45.218841  499771 kubeadm.go:573] duration metric: took 2.785385411s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1031 18:42:45.218858  499771 node_conditions.go:102] verifying NodePressure condition ...
	I1031 18:42:45.402078  499771 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1031 18:42:45.402109  499771 node_conditions.go:123] node cpu capacity is 2
	I1031 18:42:45.402118  499771 node_conditions.go:105] duration metric: took 183.256022ms to run NodePressure ...
	I1031 18:42:45.402128  499771 start.go:217] waiting for startup goroutines ...
	I1031 18:42:45.402397  499771 ssh_runner.go:195] Run: rm -f paused
	I1031 18:42:45.449561  499771 start.go:506] kubectl: 1.25.3, cluster: 1.24.6 (minor skew: 1)
	I1031 18:42:45.453078  499771 out.go:177] * Done! kubectl is now configured to use "test-preload-184006" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	4c3d6e00fd028       0bb39497ab33b       4 seconds ago       Running             kube-proxy                0                   052428aac0379
	fee175faeff0a       6e38f40d628db       16 seconds ago      Running             storage-provisioner       1                   19dc8eadb006a
	7ea3bda047988       a4ca41631cc7a       17 seconds ago      Running             coredns                   2                   bce9ad3662cf7
	88c27849466fa       c6c20157a4233       23 seconds ago      Running             kube-controller-manager   0                   b2964ef99c557
	1a6471d3f84bf       c786c777a4e1c       23 seconds ago      Running             kube-scheduler            0                   f2f205c2827b1
	d5f791c2976b5       860f263331c95       23 seconds ago      Running             kube-apiserver            0                   ea4626d2cdff9
	e9858f06237c0       aebe758cef4cd       27 seconds ago      Running             etcd                      1                   c622cad5fac64
	9bcba87ab2642       a4ca41631cc7a       27 seconds ago      Exited              coredns                   1                   bce9ad3662cf7
	
	* 
	* ==> containerd <==
	* -- Journal begins at Mon 2022-10-31 18:40:18 UTC, ends at Mon 2022-10-31 18:42:46 UTC. --
	Oct 31 18:42:40 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:40.866797110Z" level=info msg="Container to stop \"ecf66ff604dc67773d762bd59ead7b511b73692ba37299bc025044e4b1646c21\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Oct 31 18:42:40 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:40.927327386Z" level=info msg="shim disconnected" id=bf1282f55024603e0fde8852645d9f4b681bb9318557e063f2c7185b94527097
	Oct 31 18:42:40 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:40.927427273Z" level=warning msg="cleaning up after shim disconnected" id=bf1282f55024603e0fde8852645d9f4b681bb9318557e063f2c7185b94527097 namespace=k8s.io
	Oct 31 18:42:40 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:40.927446358Z" level=info msg="cleaning up dead shim"
	Oct 31 18:42:40 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:40.947731269Z" level=warning msg="cleanup warnings time=\"2022-10-31T18:42:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3813 runtime=io.containerd.runc.v2\n"
	Oct 31 18:42:40 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:40.948050155Z" level=info msg="TearDown network for sandbox \"bf1282f55024603e0fde8852645d9f4b681bb9318557e063f2c7185b94527097\" successfully"
	Oct 31 18:42:40 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:40.948126423Z" level=info msg="StopPodSandbox for \"bf1282f55024603e0fde8852645d9f4b681bb9318557e063f2c7185b94527097\" returns successfully"
	Oct 31 18:42:41 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:41.219677158Z" level=info msg="RemoveContainer for \"ecf66ff604dc67773d762bd59ead7b511b73692ba37299bc025044e4b1646c21\""
	Oct 31 18:42:41 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:41.236696613Z" level=info msg="RemoveContainer for \"ecf66ff604dc67773d762bd59ead7b511b73692ba37299bc025044e4b1646c21\" returns successfully"
	Oct 31 18:42:41 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:41.238124268Z" level=error msg="ContainerStatus for \"ecf66ff604dc67773d762bd59ead7b511b73692ba37299bc025044e4b1646c21\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ecf66ff604dc67773d762bd59ead7b511b73692ba37299bc025044e4b1646c21\": not found"
	Oct 31 18:42:41 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:41.583889586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ccss7,Uid:066efc5a-560b-427f-bb8e-7775c6105205,Namespace:kube-system,Attempt:0,}"
	Oct 31 18:42:41 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:41.616391124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 18:42:41 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:41.616475540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 18:42:41 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:41.616488127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 18:42:41 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:41.617031308Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/052428aac03794cd9f0f80f501f4074b476cf2f910debea3904dd9035b4ef857 pid=3837 runtime=io.containerd.runc.v2
	Oct 31 18:42:41 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:41.678114521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ccss7,Uid:066efc5a-560b-427f-bb8e-7775c6105205,Namespace:kube-system,Attempt:0,} returns sandbox id \"052428aac03794cd9f0f80f501f4074b476cf2f910debea3904dd9035b4ef857\""
	Oct 31 18:42:41 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:41.686197673Z" level=info msg="CreateContainer within sandbox \"052428aac03794cd9f0f80f501f4074b476cf2f910debea3904dd9035b4ef857\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Oct 31 18:42:41 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:41.712827441Z" level=info msg="CreateContainer within sandbox \"052428aac03794cd9f0f80f501f4074b476cf2f910debea3904dd9035b4ef857\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4c3d6e00fd028d1cbe49f2b7e98e190b6bd3dbe07f80b172c6d51b9bc33b5f53\""
	Oct 31 18:42:41 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:41.715657933Z" level=info msg="StartContainer for \"4c3d6e00fd028d1cbe49f2b7e98e190b6bd3dbe07f80b172c6d51b9bc33b5f53\""
	Oct 31 18:42:41 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:41.808369863Z" level=info msg="StartContainer for \"4c3d6e00fd028d1cbe49f2b7e98e190b6bd3dbe07f80b172c6d51b9bc33b5f53\" returns successfully"
	Oct 31 18:42:42 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:42.129815361Z" level=info msg="StopContainer for \"ecf66ff604dc67773d762bd59ead7b511b73692ba37299bc025044e4b1646c21\" with timeout 1 (s)"
	Oct 31 18:42:42 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:42.129907327Z" level=error msg="StopContainer for \"ecf66ff604dc67773d762bd59ead7b511b73692ba37299bc025044e4b1646c21\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ecf66ff604dc67773d762bd59ead7b511b73692ba37299bc025044e4b1646c21\": not found"
	Oct 31 18:42:42 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:42.130959791Z" level=info msg="StopPodSandbox for \"bf1282f55024603e0fde8852645d9f4b681bb9318557e063f2c7185b94527097\""
	Oct 31 18:42:42 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:42.131071801Z" level=info msg="TearDown network for sandbox \"bf1282f55024603e0fde8852645d9f4b681bb9318557e063f2c7185b94527097\" successfully"
	Oct 31 18:42:42 test-preload-184006 containerd[2556]: time="2022-10-31T18:42:42.131133005Z" level=info msg="StopPodSandbox for \"bf1282f55024603e0fde8852645d9f4b681bb9318557e063f2c7185b94527097\" returns successfully"
	
	* 
	* ==> coredns [7ea3bda04798831c460160885dde4b1b71039dc7d27cbfe3254173d78186c7b5] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 8f51b271a18f2ce6fcaee5f1cfda3ed0
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> coredns [9bcba87ab2642bce25a8e6ff58dbd95a33822d0041d733e9682521ae2d8532a2] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-184006
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-184006
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c34ec3182cacd96a3e168acffe335374d66b10cc
	                    minikube.k8s.io/name=test-preload-184006
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_10_31T18_41_06_0700
	                    minikube.k8s.io/version=v1.27.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 31 Oct 2022 18:41:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-184006
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 31 Oct 2022 18:42:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 31 Oct 2022 18:42:27 +0000   Mon, 31 Oct 2022 18:40:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 31 Oct 2022 18:42:27 +0000   Mon, 31 Oct 2022 18:40:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 31 Oct 2022 18:42:27 +0000   Mon, 31 Oct 2022 18:40:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 31 Oct 2022 18:42:27 +0000   Mon, 31 Oct 2022 18:41:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.32
	  Hostname:    test-preload-184006
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc05c6a30b104b42b8aac7b69a3b0c29
	  System UUID:                dc05c6a3-0b10-4b42-b8aa-c7b69a3b0c29
	  Boot ID:                    969fb93f-12ab-47aa-b9c5-e54f54a40acc
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.8
	  Kubelet Version:            v1.24.6
	  Kube-Proxy Version:         v1.24.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-s4vgp                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     87s
	  kube-system                 etcd-test-preload-184006                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         100s
	  kube-system                 kube-apiserver-test-preload-184006             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 kube-controller-manager-test-preload-184006    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 kube-proxy-ccss7                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-scheduler-test-preload-184006             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 85s                  kube-proxy       
	  Normal  Starting                 15s                  kube-proxy       
	  Normal  Starting                 4s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  110s (x5 over 110s)  kubelet          Node test-preload-184006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     110s (x5 over 110s)  kubelet          Node test-preload-184006 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    110s (x5 over 110s)  kubelet          Node test-preload-184006 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 101s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  100s                 kubelet          Node test-preload-184006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    100s                 kubelet          Node test-preload-184006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s                 kubelet          Node test-preload-184006 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  100s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                90s                  kubelet          Node test-preload-184006 status is now: NodeReady
	  Normal  RegisteredNode           87s                  node-controller  Node test-preload-184006 event: Registered Node test-preload-184006 in Controller
	  Normal  Starting                 25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)    kubelet          Node test-preload-184006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)    kubelet          Node test-preload-184006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)    kubelet          Node test-preload-184006 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6s                   node-controller  Node test-preload-184006 event: Registered Node test-preload-184006 in Controller
	
	* 
	* ==> dmesg <==
	* [Oct31 18:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069802] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +3.849712] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.036233] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.142445] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.037114] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.754527] systemd-fstab-generator[548]: Ignoring "noauto" for root device
	[  +0.096400] systemd-fstab-generator[559]: Ignoring "noauto" for root device
	[  +0.196665] systemd-fstab-generator[582]: Ignoring "noauto" for root device
	[ +24.681028] systemd-fstab-generator[983]: Ignoring "noauto" for root device
	[Oct31 18:41] systemd-fstab-generator[1375]: Ignoring "noauto" for root device
	[ +14.626625] kauditd_printk_skb: 7 callbacks suppressed
	[ +11.348325] kauditd_printk_skb: 22 callbacks suppressed
	[Oct31 18:42] systemd-fstab-generator[2412]: Ignoring "noauto" for root device
	[  +0.264829] systemd-fstab-generator[2452]: Ignoring "noauto" for root device
	[  +0.147829] systemd-fstab-generator[2464]: Ignoring "noauto" for root device
	[  +0.248852] systemd-fstab-generator[2518]: Ignoring "noauto" for root device
	[  +6.771357] systemd-fstab-generator[3199]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [e9858f06237c0e4abd2f9cccebe32689cbb5f07fd94fdab516df3b52652d1628] <==
	* {"level":"info","ts":"2022-10-31T18:42:20.194Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68bdcbcbc4b793bb","local-member-id":"d4c05646b7156589","cluster-version":"3.5"}
	{"level":"info","ts":"2022-10-31T18:42:20.194Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-10-31T18:42:20.196Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-10-31T18:42:20.196Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d4c05646b7156589","initial-advertise-peer-urls":["https://192.168.39.32:2380"],"listen-peer-urls":["https://192.168.39.32:2380"],"advertise-client-urls":["https://192.168.39.32:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.32:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-10-31T18:42:20.197Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-10-31T18:42:20.197Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2022-10-31T18:42:20.197Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.32:2380"}
	{"level":"info","ts":"2022-10-31T18:42:22.049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 is starting a new election at term 2"}
	{"level":"info","ts":"2022-10-31T18:42:22.050Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-10-31T18:42:22.050Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 received MsgPreVoteResp from d4c05646b7156589 at term 2"}
	{"level":"info","ts":"2022-10-31T18:42:22.050Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became candidate at term 3"}
	{"level":"info","ts":"2022-10-31T18:42:22.050Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 received MsgVoteResp from d4c05646b7156589 at term 3"}
	{"level":"info","ts":"2022-10-31T18:42:22.050Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4c05646b7156589 became leader at term 3"}
	{"level":"info","ts":"2022-10-31T18:42:22.050Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4c05646b7156589 elected leader d4c05646b7156589 at term 3"}
	{"level":"info","ts":"2022-10-31T18:42:22.053Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"d4c05646b7156589","local-member-attributes":"{Name:test-preload-184006 ClientURLs:[https://192.168.39.32:2379]}","request-path":"/0/members/d4c05646b7156589/attributes","cluster-id":"68bdcbcbc4b793bb","publish-timeout":"7s"}
	{"level":"info","ts":"2022-10-31T18:42:22.054Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-10-31T18:42:22.054Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-10-31T18:42:22.055Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-10-31T18:42:22.058Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.32:2379"}
	{"level":"info","ts":"2022-10-31T18:42:22.065Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-10-31T18:42:22.065Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"warn","ts":"2022-10-31T18:42:35.488Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"216.297933ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-test-preload-184006\" ","response":"range_response_count:1 size:6386"}
	{"level":"info","ts":"2022-10-31T18:42:35.488Z","caller":"traceutil/trace.go:171","msg":"trace[1558534492] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-test-preload-184006; range_end:; response_count:1; response_revision:523; }","duration":"216.600002ms","start":"2022-10-31T18:42:35.272Z","end":"2022-10-31T18:42:35.488Z","steps":["trace[1558534492] 'agreement among raft nodes before linearized reading'  (duration: 16.583563ms)","trace[1558534492] 'range keys from in-memory index tree'  (duration: 199.666731ms)"],"step_count":2}
	{"level":"warn","ts":"2022-10-31T18:42:35.884Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"262.182969ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-test-preload-184006\" ","response":"range_response_count:1 size:6386"}
	{"level":"info","ts":"2022-10-31T18:42:35.884Z","caller":"traceutil/trace.go:171","msg":"trace[1560297044] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-test-preload-184006; range_end:; response_count:1; response_revision:524; }","duration":"262.599069ms","start":"2022-10-31T18:42:35.622Z","end":"2022-10-31T18:42:35.884Z","steps":["trace[1560297044] 'agreement among raft nodes before linearized reading'  (duration: 20.319391ms)","trace[1560297044] 'range keys from in-memory index tree'  (duration: 241.808269ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  18:42:46 up 2 min,  0 users,  load average: 1.28, 0.64, 0.24
	Linux test-preload-184006 5.10.57 #1 SMP Wed Oct 19 23:03:20 UTC 2022 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [d5f791c2976b5bf95b93863d763f985c0eb05ade7844335bc924094d764da0f4] <==
	* I1031 18:42:27.586353       1 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
	I1031 18:42:27.587022       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1031 18:42:27.587166       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I1031 18:42:27.587312       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I1031 18:42:27.587474       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	E1031 18:42:27.661288       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1031 18:42:27.686564       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1031 18:42:27.687033       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1031 18:42:27.690663       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1031 18:42:27.691109       1 cache.go:39] Caches are synced for autoregister controller
	I1031 18:42:27.691618       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1031 18:42:27.695549       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1031 18:42:27.708198       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1031 18:42:27.729595       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1031 18:42:28.185733       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1031 18:42:28.508371       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1031 18:42:29.243889       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1031 18:42:29.252695       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1031 18:42:29.311492       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1031 18:42:29.334084       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1031 18:42:29.343740       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1031 18:42:31.247053       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1031 18:42:40.650224       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1031 18:42:40.701157       1 controller.go:611] quota admission added evaluator for: endpoints
	I1031 18:42:40.853613       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [88c27849466fa330f2a3cbba0360bcd67afc16b223fb8ba02554dc689ebac92e] <==
	* I1031 18:42:40.636675       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I1031 18:42:40.640789       1 shared_informer.go:262] Caches are synced for job
	I1031 18:42:40.642220       1 shared_informer.go:262] Caches are synced for deployment
	I1031 18:42:40.645739       1 shared_informer.go:262] Caches are synced for disruption
	I1031 18:42:40.646391       1 disruption.go:371] Sending events to api server.
	I1031 18:42:40.646253       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1031 18:42:40.646274       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1031 18:42:40.649143       1 shared_informer.go:262] Caches are synced for TTL
	I1031 18:42:40.653281       1 shared_informer.go:262] Caches are synced for crt configmap
	I1031 18:42:40.655859       1 shared_informer.go:262] Caches are synced for GC
	I1031 18:42:40.658665       1 shared_informer.go:262] Caches are synced for service account
	I1031 18:42:40.664886       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-j9mw4"
	I1031 18:42:40.762916       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I1031 18:42:40.800378       1 shared_informer.go:262] Caches are synced for resource quota
	I1031 18:42:40.815919       1 shared_informer.go:262] Caches are synced for resource quota
	I1031 18:42:40.828182       1 shared_informer.go:262] Caches are synced for persistent volume
	I1031 18:42:40.830488       1 shared_informer.go:262] Caches are synced for stateful set
	I1031 18:42:40.845624       1 shared_informer.go:262] Caches are synced for expand
	I1031 18:42:40.846137       1 shared_informer.go:262] Caches are synced for attach detach
	I1031 18:42:40.847761       1 shared_informer.go:262] Caches are synced for PVC protection
	I1031 18:42:40.857889       1 shared_informer.go:262] Caches are synced for ephemeral
	I1031 18:42:41.261640       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ccss7"
	I1031 18:42:41.277127       1 shared_informer.go:262] Caches are synced for garbage collector
	I1031 18:42:41.295188       1 shared_informer.go:262] Caches are synced for garbage collector
	I1031 18:42:41.295322       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-proxy [4c3d6e00fd028d1cbe49f2b7e98e190b6bd3dbe07f80b172c6d51b9bc33b5f53] <==
	* I1031 18:42:41.925149       1 node.go:163] Successfully retrieved node IP: 192.168.39.32
	I1031 18:42:41.925397       1 server_others.go:138] "Detected node IP" address="192.168.39.32"
	I1031 18:42:41.925709       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1031 18:42:41.969469       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1031 18:42:41.969486       1 server_others.go:206] "Using iptables Proxier"
	I1031 18:42:41.969592       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1031 18:42:41.969956       1 server.go:661] "Version info" version="v1.24.6"
	I1031 18:42:41.969965       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 18:42:41.971438       1 config.go:317] "Starting service config controller"
	I1031 18:42:41.971871       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1031 18:42:41.972011       1 config.go:226] "Starting endpoint slice config controller"
	I1031 18:42:41.972159       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1031 18:42:41.973369       1 config.go:444] "Starting node config controller"
	I1031 18:42:41.973676       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1031 18:42:42.072416       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1031 18:42:42.072658       1 shared_informer.go:262] Caches are synced for service config
	I1031 18:42:42.074612       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [1a6471d3f84bfe9e9144e1680cf0af6b8ffd1e68ac20f12d3002434a5b1e6255] <==
	* I1031 18:42:25.363431       1 serving.go:348] Generated self-signed cert in-memory
	W1031 18:42:27.597026       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1031 18:42:27.597275       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1031 18:42:27.597456       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1031 18:42:27.597570       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1031 18:42:27.662673       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.6"
	I1031 18:42:27.662801       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 18:42:27.666655       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1031 18:42:27.667721       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1031 18:42:27.668672       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1031 18:42:27.668944       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1031 18:42:27.769608       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2022-10-31 18:40:18 UTC, ends at Mon 2022-10-31 18:42:47 UTC. --
	Oct 31 18:42:41 test-preload-184006 kubelet[3205]: I1031 18:42:41.074779    3205 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsc8z\" (UniqueName: \"kubernetes.io/projected/13e663e3-979f-4420-88bb-b73be1318571-kube-api-access-wsc8z\") pod \"13e663e3-979f-4420-88bb-b73be1318571\" (UID: \"13e663e3-979f-4420-88bb-b73be1318571\") "
	Oct 31 18:42:41 test-preload-184006 kubelet[3205]: I1031 18:42:41.074809    3205 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13e663e3-979f-4420-88bb-b73be1318571-xtables-lock\") pod \"13e663e3-979f-4420-88bb-b73be1318571\" (UID: \"13e663e3-979f-4420-88bb-b73be1318571\") "
	Oct 31 18:42:41 test-preload-184006 kubelet[3205]: I1031 18:42:41.074837    3205 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13e663e3-979f-4420-88bb-b73be1318571-lib-modules\") pod \"13e663e3-979f-4420-88bb-b73be1318571\" (UID: \"13e663e3-979f-4420-88bb-b73be1318571\") "
	Oct 31 18:42:41 test-preload-184006 kubelet[3205]: I1031 18:42:41.074922    3205 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13e663e3-979f-4420-88bb-b73be1318571-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "13e663e3-979f-4420-88bb-b73be1318571" (UID: "13e663e3-979f-4420-88bb-b73be1318571"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Oct 31 18:42:41 test-preload-184006 kubelet[3205]: I1031 18:42:41.075742    3205 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13e663e3-979f-4420-88bb-b73be1318571-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "13e663e3-979f-4420-88bb-b73be1318571" (UID: "13e663e3-979f-4420-88bb-b73be1318571"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Oct 31 18:42:41 test-preload-184006 kubelet[3205]: W1031 18:42:41.075934    3205 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/13e663e3-979f-4420-88bb-b73be1318571/volumes/kubernetes.io~configmap/kube-proxy: clearQuota called, but quotas disabled
	Oct 31 18:42:41 test-preload-184006 kubelet[3205]: I1031 18:42:41.076380    3205 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13e663e3-979f-4420-88bb-b73be1318571-kube-proxy" (OuterVolumeSpecName: "kube-proxy") pod "13e663e3-979f-4420-88bb-b73be1318571" (UID: "13e663e3-979f-4420-88bb-b73be1318571"). InnerVolumeSpecName "kube-proxy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Oct 31 18:42:41 test-preload-184006 kubelet[3205]: I1031 18:42:41.082598    3205 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13e663e3-979f-4420-88bb-b73be1318571-kube-api-access-wsc8z" (OuterVolumeSpecName: "kube-api-access-wsc8z") pod "13e663e3-979f-4420-88bb-b73be1318571" (UID: "13e663e3-979f-4420-88bb-b73be1318571"). InnerVolumeSpecName "kube-api-access-wsc8z". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 31 18:42:41 test-preload-184006 kubelet[3205]: I1031 18:42:41.175583    3205 reconciler.go:384] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13e663e3-979f-4420-88bb-b73be1318571-lib-modules\") on node \"test-preload-184006\" DevicePath \"\""
	Oct 31 18:42:41 test-preload-184006 kubelet[3205]: I1031 18:42:41.175645    3205 reconciler.go:384] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13e663e3-979f-4420-88bb-b73be1318571-xtables-lock\") on node \"test-preload-184006\" DevicePath \"\""
	Oct 31 18:42:41 test-preload-184006 kubelet[3205]: I1031 18:42:41.175658    3205 reconciler.go:384] "Volume detached for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/13e663e3-979f-4420-88bb-b73be1318571-kube-proxy\") on node \"test-preload-184006\" DevicePath \"\""
	Oct 31 18:42:41 test-preload-184006 kubelet[3205]: I1031 18:42:41.175674    3205 reconciler.go:384] "Volume detached for volume \"kube-api-access-wsc8z\" (UniqueName: \"kubernetes.io/projected/13e663e3-979f-4420-88bb-b73be1318571-kube-api-access-wsc8z\") on node \"test-preload-184006\" DevicePath \"\""
	Oct 31 18:42:41 test-preload-184006 kubelet[3205]: I1031 18:42:41.212854    3205 scope.go:110] "RemoveContainer" containerID="ecf66ff604dc67773d762bd59ead7b511b73692ba37299bc025044e4b1646c21"
	Oct 31 18:42:41 test-preload-184006 kubelet[3205]: I1031 18:42:41.237739    3205 scope.go:110] "RemoveContainer" containerID="ecf66ff604dc67773d762bd59ead7b511b73692ba37299bc025044e4b1646c21"
	Oct 31 18:42:41 test-preload-184006 kubelet[3205]: E1031 18:42:41.238362    3205 remote_runtime.go:604] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ecf66ff604dc67773d762bd59ead7b511b73692ba37299bc025044e4b1646c21\": not found" containerID="ecf66ff604dc67773d762bd59ead7b511b73692ba37299bc025044e4b1646c21"
	Oct 31 18:42:41 test-preload-184006 kubelet[3205]: I1031 18:42:41.238425    3205 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:ecf66ff604dc67773d762bd59ead7b511b73692ba37299bc025044e4b1646c21} err="failed to get container status \"ecf66ff604dc67773d762bd59ead7b511b73692ba37299bc025044e4b1646c21\": rpc error: code = NotFound desc = an error occurred when try to find container \"ecf66ff604dc67773d762bd59ead7b511b73692ba37299bc025044e4b1646c21\": not found"
	Oct 31 18:42:41 test-preload-184006 kubelet[3205]: I1031 18:42:41.275870    3205 topology_manager.go:200] "Topology Admit Handler"
	Oct 31 18:42:41 test-preload-184006 kubelet[3205]: E1031 18:42:41.275983    3205 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="13e663e3-979f-4420-88bb-b73be1318571" containerName="kube-proxy"
	Oct 31 18:42:41 test-preload-184006 kubelet[3205]: I1031 18:42:41.276025    3205 memory_manager.go:345] "RemoveStaleState removing state" podUID="13e663e3-979f-4420-88bb-b73be1318571" containerName="kube-proxy"
	Oct 31 18:42:41 test-preload-184006 kubelet[3205]: I1031 18:42:41.378379    3205 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/066efc5a-560b-427f-bb8e-7775c6105205-xtables-lock\") pod \"kube-proxy-ccss7\" (UID: \"066efc5a-560b-427f-bb8e-7775c6105205\") " pod="kube-system/kube-proxy-ccss7"
	Oct 31 18:42:41 test-preload-184006 kubelet[3205]: I1031 18:42:41.378644    3205 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/066efc5a-560b-427f-bb8e-7775c6105205-lib-modules\") pod \"kube-proxy-ccss7\" (UID: \"066efc5a-560b-427f-bb8e-7775c6105205\") " pod="kube-system/kube-proxy-ccss7"
	Oct 31 18:42:41 test-preload-184006 kubelet[3205]: I1031 18:42:41.378721    3205 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p27d\" (UniqueName: \"kubernetes.io/projected/066efc5a-560b-427f-bb8e-7775c6105205-kube-api-access-6p27d\") pod \"kube-proxy-ccss7\" (UID: \"066efc5a-560b-427f-bb8e-7775c6105205\") " pod="kube-system/kube-proxy-ccss7"
	Oct 31 18:42:41 test-preload-184006 kubelet[3205]: I1031 18:42:41.378778    3205 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/066efc5a-560b-427f-bb8e-7775c6105205-kube-proxy\") pod \"kube-proxy-ccss7\" (UID: \"066efc5a-560b-427f-bb8e-7775c6105205\") " pod="kube-system/kube-proxy-ccss7"
	Oct 31 18:42:42 test-preload-184006 kubelet[3205]: E1031 18:42:42.130158    3205 remote_runtime.go:484] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ecf66ff604dc67773d762bd59ead7b511b73692ba37299bc025044e4b1646c21\": not found" containerID="ecf66ff604dc67773d762bd59ead7b511b73692ba37299bc025044e4b1646c21"
	Oct 31 18:42:42 test-preload-184006 kubelet[3205]: I1031 18:42:42.132656    3205 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=13e663e3-979f-4420-88bb-b73be1318571 path="/var/lib/kubelet/pods/13e663e3-979f-4420-88bb-b73be1318571/volumes"
	
	* 
	* ==> storage-provisioner [fee175faeff0a82a9f773ebd16081a3ad44b9751865083a39c8191b666bbaf5b] <==
	* I1031 18:42:30.078533       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1031 18:42:30.094653       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1031 18:42:30.095185       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-184006 -n test-preload-184006
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-184006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestPreload]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context test-preload-184006 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context test-preload-184006 describe pod : exit status 1 (46.745362ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:277: kubectl --context test-preload-184006 describe pod : exit status 1
helpers_test.go:175: Cleaning up "test-preload-184006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-184006
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-184006: (1.234265172s)
--- FAIL: TestPreload (161.76s)

TestStoppedBinaryUpgrade/Upgrade (186.62s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.16.0.2927453459.exe start -p stopped-upgrade-184858 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.16.0.2927453459.exe start -p stopped-upgrade-184858 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m39.147018142s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.16.0.2927453459.exe -p stopped-upgrade-184858 stop
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.16.0.2927453459.exe -p stopped-upgrade-184858 stop: (2.109595677s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-184858 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-184858 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: exit status 90 (1m25.350039787s)

-- stdout --
	* [stopped-upgrade-184858] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15242-478932/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-478932/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-184858 in cluster stopped-upgrade-184858
	* Restarting existing kvm2 VM for "stopped-upgrade-184858" ...
	
	

-- /stdout --
** stderr ** 
	I1031 18:50:40.766930  504574 out.go:296] Setting OutFile to fd 1 ...
	I1031 18:50:40.767109  504574 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:50:40.767120  504574 out.go:309] Setting ErrFile to fd 2...
	I1031 18:50:40.767133  504574 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:50:40.767338  504574 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15242-478932/.minikube/bin
	I1031 18:50:40.768250  504574 out.go:303] Setting JSON to false
	I1031 18:50:40.769736  504574 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9194,"bootTime":1667233047,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 18:50:40.769803  504574 start.go:126] virtualization: kvm guest
	I1031 18:50:40.772535  504574 out.go:177] * [stopped-upgrade-184858] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1031 18:50:40.774343  504574 notify.go:220] Checking for updates...
	I1031 18:50:40.774351  504574 out.go:177]   - MINIKUBE_LOCATION=15242
	I1031 18:50:40.775974  504574 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 18:50:40.777655  504574 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15242-478932/kubeconfig
	I1031 18:50:40.779540  504574 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-478932/.minikube
	I1031 18:50:40.781127  504574 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 18:50:40.783109  504574 config.go:180] Loaded profile config "stopped-upgrade-184858": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1031 18:50:40.783680  504574 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1031 18:50:40.783770  504574 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:50:40.803201  504574 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:39309
	I1031 18:50:40.803708  504574 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:50:40.804346  504574 main.go:134] libmachine: Using API Version  1
	I1031 18:50:40.804372  504574 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:50:40.804790  504574 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:50:40.804996  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .DriverName
	I1031 18:50:40.806925  504574 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	I1031 18:50:40.808277  504574 driver.go:365] Setting default libvirt URI to qemu:///system
	I1031 18:50:40.808696  504574 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1031 18:50:40.808748  504574 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:50:40.825819  504574 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:43373
	I1031 18:50:40.826320  504574 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:50:40.826861  504574 main.go:134] libmachine: Using API Version  1
	I1031 18:50:40.826889  504574 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:50:40.827300  504574 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:50:40.827516  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .DriverName
	I1031 18:50:40.868289  504574 out.go:177] * Using the kvm2 driver based on existing profile
	I1031 18:50:40.869633  504574 start.go:282] selected driver: kvm2
	I1031 18:50:40.869662  504574 start.go:808] validating driver "kvm2" against &{Name:stopped-upgrade-184858 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.16.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:stopped-upgrade-184858 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:}
	I1031 18:50:40.869785  504574 start.go:819] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 18:50:40.870674  504574 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 18:50:40.870935  504574 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15242-478932/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 18:50:40.887313  504574 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.27.1
	I1031 18:50:40.887677  504574 cni.go:95] Creating CNI manager for ""
	I1031 18:50:40.887695  504574 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
	I1031 18:50:40.887706  504574 start_flags.go:317] config:
	{Name:stopped-upgrade-184858 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.16.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:stopped-upgrade-184858 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:}
	I1031 18:50:40.887830  504574 iso.go:124] acquiring lock: {Name:mk75bc6a3e159cb2de2b5f76a06013b9e3e93a7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 18:50:40.890069  504574 out.go:177] * Starting control plane node stopped-upgrade-184858 in cluster stopped-upgrade-184858
	I1031 18:50:40.891338  504574 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1031 18:50:40.891381  504574 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15242-478932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I1031 18:50:40.891404  504574 cache.go:57] Caching tarball of preloaded images
	I1031 18:50:40.891541  504574 preload.go:174] Found /home/jenkins/minikube-integration/15242-478932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1031 18:50:40.891561  504574 cache.go:60] Finished verifying existence of preloaded tar for  v1.20.0 on containerd
	I1031 18:50:40.891719  504574 profile.go:148] Saving config to /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/stopped-upgrade-184858/config.json ...
	I1031 18:50:40.891911  504574 cache.go:208] Successfully downloaded all kic artifacts
	I1031 18:50:40.891934  504574 start.go:364] acquiring machines lock for stopped-upgrade-184858: {Name:mk0dc8caaddb19c1678d177d08b6bcca15077d40 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 18:50:40.892031  504574 start.go:368] acquired machines lock for "stopped-upgrade-184858" in 76.941µs
	I1031 18:50:40.892052  504574 start.go:96] Skipping create...Using existing machine configuration
	I1031 18:50:40.892060  504574 fix.go:55] fixHost starting: 
	I1031 18:50:40.892398  504574 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1031 18:50:40.892444  504574 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:50:40.908924  504574 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:42039
	I1031 18:50:40.909459  504574 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:50:40.909950  504574 main.go:134] libmachine: Using API Version  1
	I1031 18:50:40.909978  504574 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:50:40.910298  504574 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:50:40.910475  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .DriverName
	I1031 18:50:40.910693  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetState
	I1031 18:50:40.912294  504574 fix.go:103] recreateIfNeeded on stopped-upgrade-184858: state=Stopped err=<nil>
	I1031 18:50:40.912322  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .DriverName
	W1031 18:50:40.912534  504574 fix.go:129] unexpected machine state, will restart: <nil>
	I1031 18:50:40.914883  504574 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-184858" ...
	I1031 18:50:40.916528  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .Start
	I1031 18:50:40.916754  504574 main.go:134] libmachine: (stopped-upgrade-184858) Ensuring networks are active...
	I1031 18:50:40.917615  504574 main.go:134] libmachine: (stopped-upgrade-184858) Ensuring network default is active
	I1031 18:50:40.918027  504574 main.go:134] libmachine: (stopped-upgrade-184858) Ensuring network minikube-net is active
	I1031 18:50:40.918465  504574 main.go:134] libmachine: (stopped-upgrade-184858) Getting domain xml...
	I1031 18:50:40.919341  504574 main.go:134] libmachine: (stopped-upgrade-184858) Creating domain...
	I1031 18:50:42.348814  504574 main.go:134] libmachine: (stopped-upgrade-184858) Waiting to get IP...
	I1031 18:50:42.349792  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:42.350294  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | unable to find current IP address of domain stopped-upgrade-184858 in network minikube-net
	I1031 18:50:42.350406  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | I1031 18:50:42.350261  504609 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I1031 18:50:42.614912  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:42.615601  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | unable to find current IP address of domain stopped-upgrade-184858 in network minikube-net
	I1031 18:50:42.615626  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | I1031 18:50:42.615543  504609 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I1031 18:50:42.998105  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:42.998647  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | unable to find current IP address of domain stopped-upgrade-184858 in network minikube-net
	I1031 18:50:42.998679  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | I1031 18:50:42.998590  504609 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I1031 18:50:43.423256  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:43.423882  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | unable to find current IP address of domain stopped-upgrade-184858 in network minikube-net
	I1031 18:50:43.423919  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | I1031 18:50:43.423827  504609 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I1031 18:50:43.898047  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:43.898585  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | unable to find current IP address of domain stopped-upgrade-184858 in network minikube-net
	I1031 18:50:43.898625  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | I1031 18:50:43.898512  504609 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I1031 18:50:44.487362  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:44.487920  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | unable to find current IP address of domain stopped-upgrade-184858 in network minikube-net
	I1031 18:50:44.487958  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | I1031 18:50:44.487873  504609 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I1031 18:50:45.323631  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:45.336468  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | unable to find current IP address of domain stopped-upgrade-184858 in network minikube-net
	I1031 18:50:45.336496  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | I1031 18:50:45.336347  504609 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I1031 18:50:46.084879  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:46.085315  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | unable to find current IP address of domain stopped-upgrade-184858 in network minikube-net
	I1031 18:50:46.085340  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | I1031 18:50:46.085296  504609 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I1031 18:50:47.074471  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:47.074923  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | unable to find current IP address of domain stopped-upgrade-184858 in network minikube-net
	I1031 18:50:47.074998  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | I1031 18:50:47.074910  504609 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I1031 18:50:48.266289  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:48.266950  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | unable to find current IP address of domain stopped-upgrade-184858 in network minikube-net
	I1031 18:50:48.266995  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | I1031 18:50:48.266905  504609 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I1031 18:50:49.945593  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:49.946163  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | unable to find current IP address of domain stopped-upgrade-184858 in network minikube-net
	I1031 18:50:49.946209  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | I1031 18:50:49.946100  504609 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I1031 18:50:52.294771  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:52.295326  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | unable to find current IP address of domain stopped-upgrade-184858 in network minikube-net
	I1031 18:50:52.295362  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | I1031 18:50:52.295267  504609 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I1031 18:50:55.663572  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:55.663957  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | unable to find current IP address of domain stopped-upgrade-184858 in network minikube-net
	I1031 18:50:55.663982  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | I1031 18:50:55.663911  504609 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
	I1031 18:50:58.784803  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:58.785284  504574 main.go:134] libmachine: (stopped-upgrade-184858) Found IP for machine: 192.168.39.144
	I1031 18:50:58.785316  504574 main.go:134] libmachine: (stopped-upgrade-184858) Reserving static IP address...
	I1031 18:50:58.785335  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has current primary IP address 192.168.39.144 and MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:58.785814  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | found host DHCP lease matching {name: "stopped-upgrade-184858", mac: "52:54:00:21:ea:43", ip: "192.168.39.144"} in network minikube-net: {Iface:virbr1 ExpiryTime:2022-10-31 19:49:24 +0000 UTC Type:0 Mac:52:54:00:21:ea:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:stopped-upgrade-184858 Clientid:01:52:54:00:21:ea:43}
	I1031 18:50:58.785846  504574 main.go:134] libmachine: (stopped-upgrade-184858) Reserved static IP address: 192.168.39.144
	I1031 18:50:58.785871  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-184858", mac: "52:54:00:21:ea:43", ip: "192.168.39.144"}
	I1031 18:50:58.785894  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | Getting to WaitForSSH function...
	I1031 18:50:58.785911  504574 main.go:134] libmachine: (stopped-upgrade-184858) Waiting for SSH to be available...
	I1031 18:50:58.788096  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:58.788423  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ea:43", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2022-10-31 19:49:24 +0000 UTC Type:0 Mac:52:54:00:21:ea:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:stopped-upgrade-184858 Clientid:01:52:54:00:21:ea:43}
	I1031 18:50:58.788463  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined IP address 192.168.39.144 and MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:58.788588  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | Using SSH client type: external
	I1031 18:50:58.788617  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | Using SSH private key: /home/jenkins/minikube-integration/15242-478932/.minikube/machines/stopped-upgrade-184858/id_rsa (-rw-------)
	I1031 18:50:58.788655  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15242-478932/.minikube/machines/stopped-upgrade-184858/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1031 18:50:58.788673  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | About to run SSH command:
	I1031 18:50:58.788687  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | exit 0
	I1031 18:50:58.917801  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | SSH cmd err, output: <nil>: 
	I1031 18:50:58.918183  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetConfigRaw
	I1031 18:50:58.918930  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetIP
	I1031 18:50:58.921597  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:58.922010  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ea:43", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2022-10-31 19:49:24 +0000 UTC Type:0 Mac:52:54:00:21:ea:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:stopped-upgrade-184858 Clientid:01:52:54:00:21:ea:43}
	I1031 18:50:58.922052  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined IP address 192.168.39.144 and MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:58.922275  504574 profile.go:148] Saving config to /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/stopped-upgrade-184858/config.json ...
	I1031 18:50:58.922446  504574 machine.go:88] provisioning docker machine ...
	I1031 18:50:58.922466  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .DriverName
	I1031 18:50:58.922683  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetMachineName
	I1031 18:50:58.922864  504574 buildroot.go:166] provisioning hostname "stopped-upgrade-184858"
	I1031 18:50:58.922884  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetMachineName
	I1031 18:50:58.923053  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHHostname
	I1031 18:50:58.925359  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:58.925730  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ea:43", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2022-10-31 19:49:24 +0000 UTC Type:0 Mac:52:54:00:21:ea:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:stopped-upgrade-184858 Clientid:01:52:54:00:21:ea:43}
	I1031 18:50:58.925754  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined IP address 192.168.39.144 and MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:58.925908  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHPort
	I1031 18:50:58.926072  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHKeyPath
	I1031 18:50:58.926204  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHKeyPath
	I1031 18:50:58.926321  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHUsername
	I1031 18:50:58.926506  504574 main.go:134] libmachine: Using SSH client type: native
	I1031 18:50:58.926714  504574 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1031 18:50:58.926738  504574 main.go:134] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-184858 && echo "stopped-upgrade-184858" | sudo tee /etc/hostname
	I1031 18:50:59.044931  504574 main.go:134] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-184858
	
	I1031 18:50:59.044972  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHHostname
	I1031 18:50:59.047757  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:59.048138  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ea:43", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2022-10-31 19:49:24 +0000 UTC Type:0 Mac:52:54:00:21:ea:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:stopped-upgrade-184858 Clientid:01:52:54:00:21:ea:43}
	I1031 18:50:59.048173  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined IP address 192.168.39.144 and MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:59.048353  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHPort
	I1031 18:50:59.048551  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHKeyPath
	I1031 18:50:59.048705  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHKeyPath
	I1031 18:50:59.048851  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHUsername
	I1031 18:50:59.049030  504574 main.go:134] libmachine: Using SSH client type: native
	I1031 18:50:59.049173  504574 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1031 18:50:59.049190  504574 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-184858' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-184858/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-184858' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 18:50:59.167636  504574 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1031 18:50:59.167671  504574 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15242-478932/.minikube CaCertPath:/home/jenkins/minikube-integration/15242-478932/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15242-478932/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15242-478932/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15242-478932/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15242-478932/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15242-478932/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15242-478932/.minikube}
	I1031 18:50:59.167710  504574 buildroot.go:174] setting up certificates
	I1031 18:50:59.167720  504574 provision.go:83] configureAuth start
	I1031 18:50:59.167732  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetMachineName
	I1031 18:50:59.168001  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetIP
	I1031 18:50:59.170687  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:59.171078  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ea:43", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2022-10-31 19:49:24 +0000 UTC Type:0 Mac:52:54:00:21:ea:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:stopped-upgrade-184858 Clientid:01:52:54:00:21:ea:43}
	I1031 18:50:59.171103  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined IP address 192.168.39.144 and MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:59.171249  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHHostname
	I1031 18:50:59.173288  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:59.173646  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ea:43", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2022-10-31 19:49:24 +0000 UTC Type:0 Mac:52:54:00:21:ea:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:stopped-upgrade-184858 Clientid:01:52:54:00:21:ea:43}
	I1031 18:50:59.173677  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined IP address 192.168.39.144 and MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:59.173817  504574 provision.go:138] copyHostCerts
	I1031 18:50:59.173870  504574 exec_runner.go:144] found /home/jenkins/minikube-integration/15242-478932/.minikube/ca.pem, removing ...
	I1031 18:50:59.173884  504574 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15242-478932/.minikube/ca.pem
	I1031 18:50:59.173954  504574 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15242-478932/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15242-478932/.minikube/ca.pem (1078 bytes)
	I1031 18:50:59.174028  504574 exec_runner.go:144] found /home/jenkins/minikube-integration/15242-478932/.minikube/cert.pem, removing ...
	I1031 18:50:59.174036  504574 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15242-478932/.minikube/cert.pem
	I1031 18:50:59.174062  504574 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15242-478932/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15242-478932/.minikube/cert.pem (1123 bytes)
	I1031 18:50:59.174109  504574 exec_runner.go:144] found /home/jenkins/minikube-integration/15242-478932/.minikube/key.pem, removing ...
	I1031 18:50:59.174117  504574 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15242-478932/.minikube/key.pem
	I1031 18:50:59.174139  504574 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15242-478932/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15242-478932/.minikube/key.pem (1675 bytes)
	I1031 18:50:59.174178  504574 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15242-478932/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15242-478932/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15242-478932/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-184858 san=[192.168.39.144 192.168.39.144 localhost 127.0.0.1 minikube stopped-upgrade-184858]
	I1031 18:50:59.408400  504574 provision.go:172] copyRemoteCerts
	I1031 18:50:59.408473  504574 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 18:50:59.408510  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHHostname
	I1031 18:50:59.411805  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:59.412165  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ea:43", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2022-10-31 19:49:24 +0000 UTC Type:0 Mac:52:54:00:21:ea:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:stopped-upgrade-184858 Clientid:01:52:54:00:21:ea:43}
	I1031 18:50:59.412201  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined IP address 192.168.39.144 and MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:59.412349  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHPort
	I1031 18:50:59.412558  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHKeyPath
	I1031 18:50:59.412722  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHUsername
	I1031 18:50:59.412867  504574 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15242-478932/.minikube/machines/stopped-upgrade-184858/id_rsa Username:docker}
	I1031 18:50:59.497017  504574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-478932/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 18:50:59.511225  504574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-478932/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1031 18:50:59.524983  504574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-478932/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1031 18:50:59.537881  504574 provision.go:86] duration metric: configureAuth took 370.150631ms
	I1031 18:50:59.537901  504574 buildroot.go:189] setting minikube options for container-runtime
	I1031 18:50:59.538043  504574 config.go:180] Loaded profile config "stopped-upgrade-184858": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1031 18:50:59.538055  504574 machine.go:91] provisioned docker machine in 615.596012ms
	I1031 18:50:59.538062  504574 start.go:300] post-start starting for "stopped-upgrade-184858" (driver="kvm2")
	I1031 18:50:59.538069  504574 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 18:50:59.538116  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .DriverName
	I1031 18:50:59.538361  504574 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 18:50:59.538383  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHHostname
	I1031 18:50:59.540983  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:59.541353  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ea:43", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2022-10-31 19:49:24 +0000 UTC Type:0 Mac:52:54:00:21:ea:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:stopped-upgrade-184858 Clientid:01:52:54:00:21:ea:43}
	I1031 18:50:59.541381  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined IP address 192.168.39.144 and MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:59.541570  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHPort
	I1031 18:50:59.541786  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHKeyPath
	I1031 18:50:59.541936  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHUsername
	I1031 18:50:59.542083  504574 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15242-478932/.minikube/machines/stopped-upgrade-184858/id_rsa Username:docker}
	I1031 18:50:59.626177  504574 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 18:50:59.630109  504574 info.go:137] Remote host: Buildroot 2020.02.8
	I1031 18:50:59.630131  504574 filesync.go:126] Scanning /home/jenkins/minikube-integration/15242-478932/.minikube/addons for local assets ...
	I1031 18:50:59.630190  504574 filesync.go:126] Scanning /home/jenkins/minikube-integration/15242-478932/.minikube/files for local assets ...
	I1031 18:50:59.630255  504574 filesync.go:149] local asset: /home/jenkins/minikube-integration/15242-478932/.minikube/files/etc/ssl/certs/4863142.pem -> 4863142.pem in /etc/ssl/certs
	I1031 18:50:59.630329  504574 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 18:50:59.636247  504574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-478932/.minikube/files/etc/ssl/certs/4863142.pem --> /etc/ssl/certs/4863142.pem (1708 bytes)
	I1031 18:50:59.649529  504574 start.go:303] post-start completed in 111.453861ms
	I1031 18:50:59.649548  504574 fix.go:57] fixHost completed within 18.757488618s
	I1031 18:50:59.649566  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHHostname
	I1031 18:50:59.652256  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:59.652589  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ea:43", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2022-10-31 19:49:24 +0000 UTC Type:0 Mac:52:54:00:21:ea:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:stopped-upgrade-184858 Clientid:01:52:54:00:21:ea:43}
	I1031 18:50:59.652617  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined IP address 192.168.39.144 and MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:59.652795  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHPort
	I1031 18:50:59.653019  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHKeyPath
	I1031 18:50:59.653224  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHKeyPath
	I1031 18:50:59.653385  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHUsername
	I1031 18:50:59.653565  504574 main.go:134] libmachine: Using SSH client type: native
	I1031 18:50:59.653691  504574 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I1031 18:50:59.653703  504574 main.go:134] libmachine: About to run SSH command:
	date +%s.%N
	I1031 18:50:59.766654  504574 main.go:134] libmachine: SSH cmd err, output: <nil>: 1667242259.742637903
	
	I1031 18:50:59.766679  504574 fix.go:207] guest clock: 1667242259.742637903
	I1031 18:50:59.766687  504574 fix.go:220] Guest: 2022-10-31 18:50:59.742637903 +0000 UTC Remote: 2022-10-31 18:50:59.649551757 +0000 UTC m=+18.961072221 (delta=93.086146ms)
	I1031 18:50:59.766706  504574 fix.go:191] guest clock delta is within tolerance: 93.086146ms
	I1031 18:50:59.766712  504574 start.go:83] releasing machines lock for "stopped-upgrade-184858", held for 18.874667402s
	I1031 18:50:59.766766  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .DriverName
	I1031 18:50:59.767024  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetIP
	I1031 18:50:59.769808  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:59.770274  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ea:43", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2022-10-31 19:49:24 +0000 UTC Type:0 Mac:52:54:00:21:ea:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:stopped-upgrade-184858 Clientid:01:52:54:00:21:ea:43}
	I1031 18:50:59.770312  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined IP address 192.168.39.144 and MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:59.770418  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .DriverName
	I1031 18:50:59.771049  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .DriverName
	I1031 18:50:59.771259  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .DriverName
	I1031 18:50:59.771371  504574 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1031 18:50:59.771428  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHHostname
	I1031 18:50:59.771545  504574 ssh_runner.go:195] Run: systemctl --version
	I1031 18:50:59.771574  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHHostname
	I1031 18:50:59.774488  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:59.774677  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:59.774913  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ea:43", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2022-10-31 19:49:24 +0000 UTC Type:0 Mac:52:54:00:21:ea:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:stopped-upgrade-184858 Clientid:01:52:54:00:21:ea:43}
	I1031 18:50:59.774947  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined IP address 192.168.39.144 and MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:59.775065  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHPort
	I1031 18:50:59.775241  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHKeyPath
	I1031 18:50:59.775289  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:ea:43", ip: ""} in network minikube-net: {Iface:virbr1 ExpiryTime:2022-10-31 19:49:24 +0000 UTC Type:0 Mac:52:54:00:21:ea:43 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:stopped-upgrade-184858 Clientid:01:52:54:00:21:ea:43}
	I1031 18:50:59.775327  504574 main.go:134] libmachine: (stopped-upgrade-184858) DBG | domain stopped-upgrade-184858 has defined IP address 192.168.39.144 and MAC address 52:54:00:21:ea:43 in network minikube-net
	I1031 18:50:59.775397  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHUsername
	I1031 18:50:59.775475  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHPort
	I1031 18:50:59.775567  504574 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15242-478932/.minikube/machines/stopped-upgrade-184858/id_rsa Username:docker}
	I1031 18:50:59.775632  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHKeyPath
	I1031 18:50:59.775773  504574 main.go:134] libmachine: (stopped-upgrade-184858) Calling .GetSSHUsername
	I1031 18:50:59.775919  504574 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15242-478932/.minikube/machines/stopped-upgrade-184858/id_rsa Username:docker}
	I1031 18:50:59.871308  504574 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1031 18:50:59.871435  504574 ssh_runner.go:195] Run: sudo crictl images --output json
	I1031 18:51:03.883653  504574 ssh_runner.go:235] Completed: sudo crictl images --output json: (4.012181463s)
	I1031 18:51:03.883795  504574 containerd.go:549] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1031 18:51:03.883853  504574 ssh_runner.go:195] Run: which lz4
	I1031 18:51:03.888530  504574 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1031 18:51:03.893483  504574 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I1031 18:51:03.893517  504574 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-478932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (472503869 bytes)
	I1031 18:51:05.487649  504574 containerd.go:496] Took 1.599147 seconds to copy over tarball
	I1031 18:51:05.487730  504574 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1031 18:51:08.903498  504574 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.415733787s)
	I1031 18:51:08.903535  504574 containerd.go:503] Took 3.415854 seconds to extract the tarball
	I1031 18:51:08.903548  504574 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1031 18:51:08.937231  504574 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 18:51:09.055066  504574 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 18:51:09.097255  504574 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1031 18:51:09.128363  504574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 18:51:09.143425  504574 docker.go:189] disabling docker service ...
	I1031 18:51:09.143507  504574 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1031 18:51:09.156977  504574 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1031 18:51:09.169338  504574 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1031 18:51:09.296270  504574 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1031 18:51:09.429603  504574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1031 18:51:09.441315  504574 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 18:51:09.454616  504574 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.2"|' -i /etc/containerd/config.toml"
	I1031 18:51:09.461679  504574 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I1031 18:51:09.468639  504574 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I1031 18:51:09.475527  504574 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.d"|' -i /etc/containerd/config.toml"
	I1031 18:51:09.484690  504574 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1031 18:51:09.492917  504574 crio.go:137] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1031 18:51:09.492995  504574 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1031 18:51:09.505096  504574 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1031 18:51:09.511840  504574 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 18:51:09.619653  504574 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1031 18:51:11.857378  504574 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (2.237690412s)
	I1031 18:51:11.857413  504574 start.go:451] Will wait 60s for socket path /run/containerd/containerd.sock
	I1031 18:51:11.857461  504574 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1031 18:51:11.862226  504574 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I1031 18:51:12.967728  504574 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1031 18:51:12.972788  504574 start.go:472] Will wait 60s for crictl version
	I1031 18:51:12.972859  504574 ssh_runner.go:195] Run: sudo crictl version
	I1031 18:51:12.990455  504574 retry.go:31] will retry after 14.405090881s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-10-31T18:51:12Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I1031 18:51:27.396710  504574 ssh_runner.go:195] Run: sudo crictl version
	I1031 18:51:27.411675  504574 retry.go:31] will retry after 17.468400798s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-10-31T18:51:27Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I1031 18:51:44.882642  504574 ssh_runner.go:195] Run: sudo crictl version
	I1031 18:51:44.903080  504574 retry.go:31] will retry after 21.098569212s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-10-31T18:51:44Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I1031 18:52:06.002604  504574 ssh_runner.go:195] Run: sudo crictl version
	I1031 18:52:06.025056  504574 out.go:177] 
	W1031 18:52:06.027139  504574 out.go:239] X Exiting due to RUNTIME_ENABLE: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-10-31T18:52:06Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	W1031 18:52:06.027168  504574 out.go:239] * 
	W1031 18:52:06.028268  504574 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1031 18:52:06.030161  504574 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:207: upgrade from v1.16.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-184858 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (186.62s)

                                                
                                    

Test pass (262/296)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 9.16
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.25.3/json-events 6.41
11 TestDownloadOnly/v1.25.3/preload-exists 0
15 TestDownloadOnly/v1.25.3/LogsDuration 0.09
16 TestDownloadOnly/DeleteAll 0.48
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.17
19 TestBinaryMirror 0.58
20 TestOffline 96.67
22 TestAddons/Setup 143.14
24 TestAddons/parallel/Registry 16.76
25 TestAddons/parallel/Ingress 27.57
26 TestAddons/parallel/MetricsServer 5.62
27 TestAddons/parallel/HelmTiller 18.94
29 TestAddons/parallel/CSI 49
30 TestAddons/parallel/Headlamp 11.19
31 TestAddons/parallel/CloudSpanner 5.4
33 TestAddons/serial/GCPAuth 41.66
34 TestAddons/StoppedEnableDisable 92.6
35 TestCertOptions 81.56
36 TestCertExpiration 293.52
38 TestForceSystemdFlag 129.37
39 TestForceSystemdEnv 57.63
40 TestKVMDriverInstallOrUpdate 7.02
44 TestErrorSpam/setup 54.12
45 TestErrorSpam/start 0.43
46 TestErrorSpam/status 0.87
47 TestErrorSpam/pause 1.53
48 TestErrorSpam/unpause 1.63
49 TestErrorSpam/stop 1.5
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 67.1
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 25.18
56 TestFunctional/serial/KubeContext 0.04
57 TestFunctional/serial/KubectlGetPods 0.09
60 TestFunctional/serial/CacheCmd/cache/add_remote 4.8
61 TestFunctional/serial/CacheCmd/cache/add_local 2.39
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
63 TestFunctional/serial/CacheCmd/cache/list 0.07
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
65 TestFunctional/serial/CacheCmd/cache/cache_reload 2.16
66 TestFunctional/serial/CacheCmd/cache/delete 0.14
67 TestFunctional/serial/MinikubeKubectlCmd 0.13
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
69 TestFunctional/serial/ExtraConfig 31.46
70 TestFunctional/serial/ComponentHealth 0.06
71 TestFunctional/serial/LogsCmd 1.36
72 TestFunctional/serial/LogsFileCmd 1.33
75 TestFunctional/parallel/DashboardCmd 14.33
76 TestFunctional/parallel/DryRun 0.35
77 TestFunctional/parallel/InternationalLanguage 0.17
78 TestFunctional/parallel/StatusCmd 1.25
81 TestFunctional/parallel/ServiceCmd 12.62
82 TestFunctional/parallel/ServiceCmdConnect 8.58
83 TestFunctional/parallel/AddonsCmd 0.18
84 TestFunctional/parallel/PersistentVolumeClaim 46.06
86 TestFunctional/parallel/SSHCmd 0.57
87 TestFunctional/parallel/CpCmd 1.11
88 TestFunctional/parallel/MySQL 30.33
89 TestFunctional/parallel/FileSync 0.24
90 TestFunctional/parallel/CertSync 1.72
94 TestFunctional/parallel/NodeLabels 0.07
96 TestFunctional/parallel/NonActiveRuntimeDisabled 0.6
98 TestFunctional/parallel/License 0.19
99 TestFunctional/parallel/Version/short 0.07
100 TestFunctional/parallel/Version/components 0.71
101 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
102 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
103 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
104 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
105 TestFunctional/parallel/ImageCommands/ImageBuild 4.68
106 TestFunctional/parallel/ImageCommands/Setup 1.42
107 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
108 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
109 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
110 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.11
111 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.94
112 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.27
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.05
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.85
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.97
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.06
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
126 TestFunctional/parallel/MountCmd/any-port 10.99
127 TestFunctional/parallel/ProfileCmd/profile_list 0.37
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
129 TestFunctional/parallel/MountCmd/specific-port 1.8
130 TestFunctional/delete_addon-resizer_images 0.09
131 TestFunctional/delete_my-image_image 0.02
132 TestFunctional/delete_minikube_cached_images 0.02
135 TestIngressAddonLegacy/StartLegacyK8sCluster 75.96
137 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 15.4
138 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.41
139 TestIngressAddonLegacy/serial/ValidateIngressAddons 41.42
142 TestJSONOutput/start/Command 109.57
143 TestJSONOutput/start/Audit 0
145 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
146 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
148 TestJSONOutput/pause/Command 0.64
149 TestJSONOutput/pause/Audit 0
151 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/unpause/Command 0.59
155 TestJSONOutput/unpause/Audit 0
157 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/stop/Command 2.12
161 TestJSONOutput/stop/Audit 0
163 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
165 TestErrorJSONOutput 0.27
169 TestMainNoArgs 0.07
170 TestMinikubeProfile 111.71
173 TestMountStart/serial/StartWithMountFirst 26.9
174 TestMountStart/serial/VerifyMountFirst 0.42
175 TestMountStart/serial/StartWithMountSecond 26.89
176 TestMountStart/serial/VerifyMountSecond 0.43
177 TestMountStart/serial/DeleteFirst 0.9
178 TestMountStart/serial/VerifyMountPostDelete 0.44
179 TestMountStart/serial/Stop 1.2
180 TestMountStart/serial/RestartStopped 21.88
181 TestMountStart/serial/VerifyMountPostStop 0.44
184 TestMultiNode/serial/FreshStart2Nodes 151.26
185 TestMultiNode/serial/DeployApp2Nodes 5.95
186 TestMultiNode/serial/PingHostFrom2Pods 0.95
187 TestMultiNode/serial/AddNode 62.48
188 TestMultiNode/serial/ProfileList 0.25
189 TestMultiNode/serial/CopyFile 8.24
190 TestMultiNode/serial/StopNode 2.17
191 TestMultiNode/serial/StartAfterStop 62.58
192 TestMultiNode/serial/RestartKeepsNodes 522.07
193 TestMultiNode/serial/DeleteNode 2.12
194 TestMultiNode/serial/StopMultiNode 183.51
195 TestMultiNode/serial/RestartMultiNode 281.01
196 TestMultiNode/serial/ValidateNameConflict 56.01
203 TestScheduledStopUnix 126.21
207 TestRunningBinaryUpgrade 124.1
209 TestKubernetesUpgrade 257.12
213 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
216 TestNoKubernetes/serial/StartWithK8s 104.31
221 TestNetworkPlugins/group/false 0.32
225 TestNoKubernetes/serial/StartWithStopK8s 44.97
226 TestNoKubernetes/serial/Start 50.16
227 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
228 TestNoKubernetes/serial/ProfileList 1.1
229 TestNoKubernetes/serial/Stop 1.24
230 TestNoKubernetes/serial/StartNoArgs 40.39
231 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
232 TestStoppedBinaryUpgrade/Setup 0.6
235 TestPause/serial/Start 99.32
243 TestNetworkPlugins/group/auto/Start 131
244 TestPause/serial/SecondStartNoReconfiguration 56.78
245 TestPause/serial/Pause 0.7
246 TestPause/serial/VerifyStatus 0.29
247 TestPause/serial/Unpause 0.63
248 TestPause/serial/PauseAgain 0.79
249 TestPause/serial/DeletePaused 1.01
250 TestPause/serial/VerifyDeletedResources 0.47
251 TestNetworkPlugins/group/cilium/Start 105.1
252 TestStoppedBinaryUpgrade/MinikubeLogs 0.74
253 TestNetworkPlugins/group/calico/Start 342.08
254 TestNetworkPlugins/group/custom-flannel/Start 155.69
255 TestNetworkPlugins/group/auto/KubeletFlags 0.25
256 TestNetworkPlugins/group/auto/NetCatPod 11.43
257 TestNetworkPlugins/group/auto/DNS 0.18
258 TestNetworkPlugins/group/auto/Localhost 0.15
259 TestNetworkPlugins/group/auto/HairPin 0.15
260 TestNetworkPlugins/group/kindnet/Start 80.92
261 TestNetworkPlugins/group/cilium/ControllerPod 5.04
262 TestNetworkPlugins/group/cilium/KubeletFlags 0.57
263 TestNetworkPlugins/group/cilium/NetCatPod 13.51
264 TestNetworkPlugins/group/cilium/DNS 0.27
265 TestNetworkPlugins/group/cilium/Localhost 0.17
266 TestNetworkPlugins/group/cilium/HairPin 0.16
267 TestNetworkPlugins/group/flannel/Start 121.88
268 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
269 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
270 TestNetworkPlugins/group/kindnet/NetCatPod 13.42
271 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
272 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.34
273 TestNetworkPlugins/group/kindnet/DNS 0.24
274 TestNetworkPlugins/group/kindnet/Localhost 0.17
275 TestNetworkPlugins/group/kindnet/HairPin 0.16
276 TestNetworkPlugins/group/enable-default-cni/Start 109.5
277 TestNetworkPlugins/group/custom-flannel/DNS 0.19
278 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
279 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
280 TestNetworkPlugins/group/bridge/Start 88.63
281 TestNetworkPlugins/group/flannel/ControllerPod 5.02
282 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
283 TestNetworkPlugins/group/flannel/NetCatPod 13.34
284 TestNetworkPlugins/group/flannel/DNS 0.17
285 TestNetworkPlugins/group/flannel/Localhost 0.13
286 TestNetworkPlugins/group/flannel/HairPin 0.18
288 TestStartStop/group/old-k8s-version/serial/FirstStart 135.12
289 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
290 TestNetworkPlugins/group/bridge/NetCatPod 12.38
291 TestNetworkPlugins/group/bridge/DNS 26.76
292 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
293 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.36
294 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
295 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
296 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
298 TestStartStop/group/no-preload/serial/FirstStart 132.41
299 TestNetworkPlugins/group/bridge/Localhost 0.2
300 TestNetworkPlugins/group/bridge/HairPin 0.17
302 TestStartStop/group/embed-certs/serial/FirstStart 84.33
303 TestNetworkPlugins/group/calico/ControllerPod 5.33
304 TestNetworkPlugins/group/calico/KubeletFlags 0.43
305 TestNetworkPlugins/group/calico/NetCatPod 13.23
306 TestNetworkPlugins/group/calico/DNS 0.25
307 TestNetworkPlugins/group/calico/Localhost 0.15
308 TestNetworkPlugins/group/calico/HairPin 0.27
310 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 116.56
311 TestStartStop/group/embed-certs/serial/DeployApp 11.44
312 TestStartStop/group/old-k8s-version/serial/DeployApp 9.38
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 4.5
314 TestStartStop/group/embed-certs/serial/Stop 102.43
315 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.69
316 TestStartStop/group/old-k8s-version/serial/Stop 92.41
317 TestStartStop/group/no-preload/serial/DeployApp 9.36
318 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.87
319 TestStartStop/group/no-preload/serial/Stop 92.41
320 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.37
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.79
322 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.75
323 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
324 TestStartStop/group/old-k8s-version/serial/SecondStart 378.95
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
326 TestStartStop/group/embed-certs/serial/SecondStart 385.55
327 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
328 TestStartStop/group/no-preload/serial/SecondStart 330.4
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
330 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 391.37
331 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.02
332 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
333 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
334 TestStartStop/group/no-preload/serial/Pause 2.74
336 TestStartStop/group/newest-cni/serial/FirstStart 69.06
337 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
338 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
339 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.36
340 TestStartStop/group/old-k8s-version/serial/Pause 3.27
341 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 14.03
342 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
343 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.41
344 TestStartStop/group/embed-certs/serial/Pause 3.56
345 TestStartStop/group/newest-cni/serial/DeployApp 0
346 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.83
347 TestStartStop/group/newest-cni/serial/Stop 5.14
348 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
349 TestStartStop/group/newest-cni/serial/SecondStart 107.85
350 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 21.02
351 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
352 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
353 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.56
354 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
355 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
357 TestStartStop/group/newest-cni/serial/Pause 2.17
TestDownloadOnly/v1.16.0/json-events (9.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-180034 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-180034 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (9.161064276s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (9.16s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-180034
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-180034: exit status 85 (85.732027ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-180034 | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC |          |
	|         | -p download-only-180034        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/10/31 18:00:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 18:00:34.267021  486326 out.go:296] Setting OutFile to fd 1 ...
	I1031 18:00:34.267337  486326 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:00:34.267356  486326 out.go:309] Setting ErrFile to fd 2...
	I1031 18:00:34.267365  486326 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:00:34.268662  486326 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15242-478932/.minikube/bin
	W1031 18:00:34.268801  486326 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15242-478932/.minikube/config/config.json: open /home/jenkins/minikube-integration/15242-478932/.minikube/config/config.json: no such file or directory
	I1031 18:00:34.269486  486326 out.go:303] Setting JSON to true
	I1031 18:00:34.270574  486326 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6187,"bootTime":1667233047,"procs":437,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 18:00:34.270641  486326 start.go:126] virtualization: kvm guest
	I1031 18:00:34.273770  486326 out.go:97] [download-only-180034] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1031 18:00:34.273878  486326 notify.go:220] Checking for updates...
	W1031 18:00:34.273876  486326 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15242-478932/.minikube/cache/preloaded-tarball: no such file or directory
	I1031 18:00:34.275631  486326 out.go:169] MINIKUBE_LOCATION=15242
	I1031 18:00:34.277476  486326 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 18:00:34.279119  486326 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15242-478932/kubeconfig
	I1031 18:00:34.280678  486326 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-478932/.minikube
	I1031 18:00:34.282247  486326 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1031 18:00:34.285227  486326 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1031 18:00:34.285408  486326 driver.go:365] Setting default libvirt URI to qemu:///system
	I1031 18:00:34.320430  486326 out.go:97] Using the kvm2 driver based on user configuration
	I1031 18:00:34.320468  486326 start.go:282] selected driver: kvm2
	I1031 18:00:34.320492  486326 start.go:808] validating driver "kvm2" against <nil>
	I1031 18:00:34.320807  486326 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 18:00:34.321126  486326 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15242-478932/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 18:00:34.336540  486326 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.27.1
	I1031 18:00:34.336637  486326 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1031 18:00:34.337157  486326 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32101MB, container=0MB
	I1031 18:00:34.337295  486326 start_flags.go:870] Wait components to verify : map[apiserver:true system_pods:true]
	I1031 18:00:34.337354  486326 cni.go:95] Creating CNI manager for ""
	I1031 18:00:34.337375  486326 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
	I1031 18:00:34.337401  486326 start_flags.go:312] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1031 18:00:34.337417  486326 start_flags.go:317] config:
	{Name:download-only-180034 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-180034 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 18:00:34.337675  486326 iso.go:124] acquiring lock: {Name:mk75bc6a3e159cb2de2b5f76a06013b9e3e93a7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 18:00:34.340196  486326 out.go:97] Downloading VM boot image ...
	I1031 18:00:34.340247  486326 download.go:101] Downloading: https://storage.googleapis.com/minikube-builds/iso/15159/minikube-v1.27.0-1666206003-15159-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/15159/minikube-v1.27.0-1666206003-15159-amd64.iso.sha256 -> /home/jenkins/minikube-integration/15242-478932/.minikube/cache/iso/amd64/minikube-v1.27.0-1666206003-15159-amd64.iso
	I1031 18:00:36.977217  486326 out.go:97] Starting control plane node download-only-180034 in cluster download-only-180034
	I1031 18:00:36.977252  486326 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1031 18:00:37.087179  486326 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I1031 18:00:37.087236  486326 cache.go:57] Caching tarball of preloaded images
	I1031 18:00:37.087497  486326 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I1031 18:00:37.089634  486326 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1031 18:00:37.089663  486326 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I1031 18:00:37.204860  486326 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/15242-478932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-180034"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

TestDownloadOnly/v1.25.3/json-events (6.41s)
=== RUN   TestDownloadOnly/v1.25.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-180034 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-180034 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (6.405747978s)
--- PASS: TestDownloadOnly/v1.25.3/json-events (6.41s)

TestDownloadOnly/v1.25.3/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.25.3/preload-exists
--- PASS: TestDownloadOnly/v1.25.3/preload-exists (0.00s)

TestDownloadOnly/v1.25.3/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.25.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-180034
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-180034: exit status 85 (86.587911ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-180034 | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC |          |
	|         | -p download-only-180034        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-180034 | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC |          |
	|         | -p download-only-180034        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.25.3   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/10/31 18:00:43
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.19.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 18:00:43.515724  486364 out.go:296] Setting OutFile to fd 1 ...
	I1031 18:00:43.516162  486364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:00:43.516179  486364 out.go:309] Setting ErrFile to fd 2...
	I1031 18:00:43.516187  486364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:00:43.516474  486364 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15242-478932/.minikube/bin
	W1031 18:00:43.516792  486364 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15242-478932/.minikube/config/config.json: open /home/jenkins/minikube-integration/15242-478932/.minikube/config/config.json: no such file or directory
	I1031 18:00:43.517661  486364 out.go:303] Setting JSON to true
	I1031 18:00:43.518744  486364 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6196,"bootTime":1667233047,"procs":433,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 18:00:43.518812  486364 start.go:126] virtualization: kvm guest
	I1031 18:00:43.520791  486364 out.go:97] [download-only-180034] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1031 18:00:43.520875  486364 notify.go:220] Checking for updates...
	I1031 18:00:43.522379  486364 out.go:169] MINIKUBE_LOCATION=15242
	I1031 18:00:43.524015  486364 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 18:00:43.525756  486364 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15242-478932/kubeconfig
	I1031 18:00:43.527327  486364 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-478932/.minikube
	I1031 18:00:43.529140  486364 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1031 18:00:43.532218  486364 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1031 18:00:43.532611  486364 config.go:180] Loaded profile config "download-only-180034": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W1031 18:00:43.532673  486364 start.go:716] api.Load failed for download-only-180034: filestore "download-only-180034": Docker machine "download-only-180034" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1031 18:00:43.532720  486364 driver.go:365] Setting default libvirt URI to qemu:///system
	W1031 18:00:43.532752  486364 start.go:716] api.Load failed for download-only-180034: filestore "download-only-180034": Docker machine "download-only-180034" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1031 18:00:43.564754  486364 out.go:97] Using the kvm2 driver based on existing profile
	I1031 18:00:43.564773  486364 start.go:282] selected driver: kvm2
	I1031 18:00:43.564783  486364 start.go:808] validating driver "kvm2" against &{Name:download-only-180034 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15159/minikube-v1.27.0-1666206003-15159-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.16.0 ClusterName:download-only-180034 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 18:00:43.565119  486364 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 18:00:43.565278  486364 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15242-478932/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 18:00:43.580436  486364 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.27.1
	I1031 18:00:43.581118  486364 cni.go:95] Creating CNI manager for ""
	I1031 18:00:43.581135  486364 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
	I1031 18:00:43.581150  486364 start_flags.go:317] config:
	{Name:download-only-180034 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15159/minikube-v1.27.0-1666206003-15159-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:download-only-180034 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwar
ePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 18:00:43.581266  486364 iso.go:124] acquiring lock: {Name:mk75bc6a3e159cb2de2b5f76a06013b9e3e93a7b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 18:00:43.583276  486364 out.go:97] Starting control plane node download-only-180034 in cluster download-only-180034
	I1031 18:00:43.583291  486364 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1031 18:00:43.693846  486364 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I1031 18:00:43.693895  486364 cache.go:57] Caching tarball of preloaded images
	I1031 18:00:43.694138  486364 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime containerd
	I1031 18:00:43.696363  486364 out.go:97] Downloading Kubernetes v1.25.3 preload ...
	I1031 18:00:43.696381  486364 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 ...
	I1031 18:00:43.806772  486364 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:60f9fee056da17edf086af60afca6341 -> /home/jenkins/minikube-integration/15242-478932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4
	I1031 18:00:48.006499  486364 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 ...
	I1031 18:00:48.006612  486364 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15242-478932/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-containerd-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-180034"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.3/LogsDuration (0.09s)

TestDownloadOnly/DeleteAll (0.48s)
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.48s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-180034
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

TestBinaryMirror (0.58s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-180050 --alsologtostderr --binary-mirror http://127.0.0.1:41577 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-180050" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-180050
--- PASS: TestBinaryMirror (0.58s)

TestOffline (96.67s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-184454 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-184454 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m35.454130852s)
helpers_test.go:175: Cleaning up "offline-containerd-184454" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-184454
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-184454: (1.214388511s)
--- PASS: TestOffline (96.67s)

TestAddons/Setup (143.14s)
=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-linux-amd64 start -p addons-180051 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-linux-amd64 start -p addons-180051 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m23.141762964s)
--- PASS: TestAddons/Setup (143.14s)

TestAddons/parallel/Registry (16.76s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:283: registry stabilized in 17.747826ms
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-nxgmk" [33408e93-7b6e-4b05-8cad-28fa8d98c7d7] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.014253045s
addons_test.go:288: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-proxy-pdg48" [496f4285-30e1-433b-b7f2-e3a1759ea3d0] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:288: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.012533459s
addons_test.go:293: (dbg) Run:  kubectl --context addons-180051 delete po -l run=registry-test --now
addons_test.go:298: (dbg) Run:  kubectl --context addons-180051 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:298: (dbg) Done: kubectl --context addons-180051 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.039140209s)
addons_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p addons-180051 ip
2022/10/31 18:03:30 [DEBUG] GET http://192.168.39.63:5000
addons_test.go:341: (dbg) Run:  out/minikube-linux-amd64 -p addons-180051 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.76s)

TestAddons/parallel/Ingress (27.57s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:165: (dbg) Run:  kubectl --context addons-180051 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:185: (dbg) Run:  kubectl --context addons-180051 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:198: (dbg) Run:  kubectl --context addons-180051 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:203: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [5d5c2354-c8f0-4830-8b62-5614ef31cbad] Pending
helpers_test.go:342: "nginx" [5d5c2354-c8f0-4830-8b62-5614ef31cbad] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [5d5c2354-c8f0-4830-8b62-5614ef31cbad] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:203: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 16.013059589s
addons_test.go:215: (dbg) Run:  out/minikube-linux-amd64 -p addons-180051 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

=== CONT  TestAddons/parallel/Ingress
addons_test.go:239: (dbg) Run:  kubectl --context addons-180051 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p addons-180051 ip

=== CONT  TestAddons/parallel/Ingress
addons_test.go:250: (dbg) Run:  nslookup hello-john.test 192.168.39.63
addons_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p addons-180051 addons disable ingress-dns --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Ingress
addons_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p addons-180051 addons disable ingress-dns --alsologtostderr -v=1: (1.843690701s)
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-180051 addons disable ingress --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Ingress
addons_test.go:264: (dbg) Done: out/minikube-linux-amd64 -p addons-180051 addons disable ingress --alsologtostderr -v=1: (7.584029531s)
--- PASS: TestAddons/parallel/Ingress (27.57s)

TestAddons/parallel/MetricsServer (5.62s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:360: metrics-server stabilized in 2.863272ms
addons_test.go:362: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-769cd898cd-r9g4d" [556f457f-e06c-4dbd-abf6-5fadd60f870c] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:362: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009996264s
addons_test.go:368: (dbg) Run:  kubectl --context addons-180051 top pods -n kube-system
addons_test.go:385: (dbg) Run:  out/minikube-linux-amd64 -p addons-180051 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.62s)

TestAddons/parallel/HelmTiller (18.94s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: tiller-deploy stabilized in 6.663913ms
addons_test.go:411: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-696b5bfbb7-gkrqk" [b3029073-c891-4eac-8bc9-ba92b35f40a4] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:411: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.025633424s
addons_test.go:426: (dbg) Run:  kubectl --context addons-180051 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:426: (dbg) Done: kubectl --context addons-180051 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (13.082637187s)
addons_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p addons-180051 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (18.94s)

TestAddons/parallel/CSI (49s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:514: csi-hostpath-driver pods stabilized in 26.324107ms
addons_test.go:517: (dbg) Run:  kubectl --context addons-180051 create -f testdata/csi-hostpath-driver/pvc.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:522: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-180051 get pvc hpvc -o jsonpath={.status.phase} -n default

=== CONT  TestAddons/parallel/CSI
helpers_test.go:392: (dbg) Run:  kubectl --context addons-180051 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:527: (dbg) Run:  kubectl --context addons-180051 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:532: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [c79ca72f-32f4-49db-aa5b-9ccafee25853] Pending

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [c79ca72f-32f4-49db-aa5b-9ccafee25853] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [c79ca72f-32f4-49db-aa5b-9ccafee25853] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:532: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 24.007107283s
addons_test.go:537: (dbg) Run:  kubectl --context addons-180051 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:542: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-180051 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:425: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:417: (dbg) Run:  kubectl --context addons-180051 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:547: (dbg) Run:  kubectl --context addons-180051 delete pod task-pv-pod
addons_test.go:553: (dbg) Run:  kubectl --context addons-180051 delete pvc hpvc
addons_test.go:559: (dbg) Run:  kubectl --context addons-180051 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:564: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-180051 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:569: (dbg) Run:  kubectl --context addons-180051 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
=== CONT  TestAddons/parallel/CSI
addons_test.go:574: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [afc1b9c4-1bc1-45c1-b7ed-e5a1716fb6e5] Pending
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [afc1b9c4-1bc1-45c1-b7ed-e5a1716fb6e5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod-restore" [afc1b9c4-1bc1-45c1-b7ed-e5a1716fb6e5] Running
=== CONT  TestAddons/parallel/CSI
addons_test.go:574: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 12.007735044s
addons_test.go:579: (dbg) Run:  kubectl --context addons-180051 delete pod task-pv-pod-restore
addons_test.go:583: (dbg) Run:  kubectl --context addons-180051 delete pvc hpvc-restore
addons_test.go:587: (dbg) Run:  kubectl --context addons-180051 delete volumesnapshot new-snapshot-demo
addons_test.go:591: (dbg) Run:  out/minikube-linux-amd64 -p addons-180051 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:591: (dbg) Done: out/minikube-linux-amd64 -p addons-180051 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.861529125s)
addons_test.go:595: (dbg) Run:  out/minikube-linux-amd64 -p addons-180051 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (49.00s)
TestAddons/parallel/Headlamp (11.19s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:738: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-180051 --alsologtostderr -v=1
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:738: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-180051 --alsologtostderr -v=1: (1.181339711s)
addons_test.go:743: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-5f4cf474d8-z5xf6" [b342e139-6a3d-4e24-8553-60127375f814] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-5f4cf474d8-z5xf6" [b342e139-6a3d-4e24-8553-60127375f814] Running
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.008366909s
--- PASS: TestAddons/parallel/Headlamp (11.19s)
TestAddons/parallel/CloudSpanner (5.4s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:759: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
=== CONT  TestAddons/parallel/CloudSpanner
helpers_test.go:342: "cloud-spanner-emulator-6c47ff8fb6-2s6th" [350a20d4-1233-46d8-ac75-598f9e51d3ab] Running
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:759: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.016925724s
addons_test.go:762: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-180051
--- PASS: TestAddons/parallel/CloudSpanner (5.40s)
TestAddons/serial/GCPAuth (41.66s)
=== RUN   TestAddons/serial/GCPAuth
addons_test.go:606: (dbg) Run:  kubectl --context addons-180051 create -f testdata/busybox.yaml
addons_test.go:613: (dbg) Run:  kubectl --context addons-180051 create sa gcp-auth-test
addons_test.go:619: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [6197b29e-48ee-40fe-8826-3046f2eca9cc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [6197b29e-48ee-40fe-8826-3046f2eca9cc] Running
addons_test.go:619: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 9.008656383s
addons_test.go:625: (dbg) Run:  kubectl --context addons-180051 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:637: (dbg) Run:  kubectl --context addons-180051 describe sa gcp-auth-test
addons_test.go:675: (dbg) Run:  kubectl --context addons-180051 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:688: (dbg) Run:  out/minikube-linux-amd64 -p addons-180051 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:688: (dbg) Done: out/minikube-linux-amd64 -p addons-180051 addons disable gcp-auth --alsologtostderr -v=1: (6.061394269s)
addons_test.go:704: (dbg) Run:  out/minikube-linux-amd64 -p addons-180051 addons enable gcp-auth
addons_test.go:704: (dbg) Done: out/minikube-linux-amd64 -p addons-180051 addons enable gcp-auth: (2.041257285s)
addons_test.go:710: (dbg) Run:  kubectl --context addons-180051 apply -f testdata/private-image.yaml
addons_test.go:717: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:342: "private-image-5c86c669bd-sswfs" [2e96bd0b-3355-4284-8f52-7334dd1b018d] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:342: "private-image-5c86c669bd-sswfs" [2e96bd0b-3355-4284-8f52-7334dd1b018d] Running
addons_test.go:717: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 15.010164745s
addons_test.go:723: (dbg) Run:  kubectl --context addons-180051 apply -f testdata/private-image-eu.yaml
addons_test.go:728: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:342: "private-image-eu-64c96f687b-kh27z" [1db685d7-2222-497d-861b-fc2b236c0b76] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:342: "private-image-eu-64c96f687b-kh27z" [1db685d7-2222-497d-861b-fc2b236c0b76] Running
addons_test.go:728: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 8.014922698s
--- PASS: TestAddons/serial/GCPAuth (41.66s)
TestAddons/StoppedEnableDisable (92.6s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:135: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-180051
addons_test.go:135: (dbg) Done: out/minikube-linux-amd64 stop -p addons-180051: (1m32.381084236s)
addons_test.go:139: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-180051
addons_test.go:143: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-180051
--- PASS: TestAddons/StoppedEnableDisable (92.60s)
TestCertOptions (81.56s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-184631 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-184631 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m19.429778725s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-184631 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-184631 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-184631 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-184631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-184631
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-184631: (1.610389246s)
--- PASS: TestCertOptions (81.56s)
TestCertExpiration (293.52s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-184552 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-184552 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m34.562318467s)
E1031 18:47:39.134712  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-184552 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-184552 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (17.857228991s)
helpers_test.go:175: Cleaning up "cert-expiration-184552" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-184552
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-184552: (1.097103337s)
--- PASS: TestCertExpiration (293.52s)
TestForceSystemdFlag (129.37s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-184455 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-184455 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (2m7.407476239s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-184455 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-184455" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-184455
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-184455: (1.673608733s)
--- PASS: TestForceSystemdFlag (129.37s)
TestForceSystemdEnv (57.63s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-184454 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-184454 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (56.201003346s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-184454 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-184454" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-184454
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-184454: (1.179472649s)
--- PASS: TestForceSystemdEnv (57.63s)
TestKVMDriverInstallOrUpdate (7.02s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (7.02s)
TestErrorSpam/setup (54.12s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-180618 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-180618 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-180618 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-180618 --driver=kvm2  --container-runtime=containerd: (54.117221421s)
--- PASS: TestErrorSpam/setup (54.12s)
TestErrorSpam/start (0.43s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180618 --log_dir /tmp/nospam-180618 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180618 --log_dir /tmp/nospam-180618 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180618 --log_dir /tmp/nospam-180618 start --dry-run
--- PASS: TestErrorSpam/start (0.43s)
TestErrorSpam/status (0.87s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180618 --log_dir /tmp/nospam-180618 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180618 --log_dir /tmp/nospam-180618 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180618 --log_dir /tmp/nospam-180618 status
--- PASS: TestErrorSpam/status (0.87s)
TestErrorSpam/pause (1.53s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180618 --log_dir /tmp/nospam-180618 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180618 --log_dir /tmp/nospam-180618 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180618 --log_dir /tmp/nospam-180618 pause
--- PASS: TestErrorSpam/pause (1.53s)
TestErrorSpam/unpause (1.63s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180618 --log_dir /tmp/nospam-180618 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180618 --log_dir /tmp/nospam-180618 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180618 --log_dir /tmp/nospam-180618 unpause
--- PASS: TestErrorSpam/unpause (1.63s)
TestErrorSpam/stop (1.5s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180618 --log_dir /tmp/nospam-180618 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-180618 --log_dir /tmp/nospam-180618 stop: (1.303948639s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180618 --log_dir /tmp/nospam-180618 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-180618 --log_dir /tmp/nospam-180618 stop
--- PASS: TestErrorSpam/stop (1.50s)
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /home/jenkins/minikube-integration/15242-478932/.minikube/files/etc/test/nested/copy/486314/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
TestFunctional/serial/StartWithProxy (67.1s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-linux-amd64 start -p functional-180719 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E1031 18:08:14.492921  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
E1031 18:08:14.498768  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
E1031 18:08:14.508991  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
E1031 18:08:14.529232  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
E1031 18:08:14.569514  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
E1031 18:08:14.649852  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
E1031 18:08:14.810265  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
E1031 18:08:15.130845  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
E1031 18:08:15.771804  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
E1031 18:08:17.052044  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
E1031 18:08:19.613159  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
E1031 18:08:24.734257  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
functional_test.go:2161: (dbg) Done: out/minikube-linux-amd64 start -p functional-180719 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m7.099171142s)
--- PASS: TestFunctional/serial/StartWithProxy (67.10s)
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)
TestFunctional/serial/SoftStart (25.18s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-linux-amd64 start -p functional-180719 --alsologtostderr -v=8
E1031 18:08:34.974885  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
functional_test.go:652: (dbg) Done: out/minikube-linux-amd64 start -p functional-180719 --alsologtostderr -v=8: (25.179503131s)
functional_test.go:656: soft start took 25.180091467s for "functional-180719" cluster.
--- PASS: TestFunctional/serial/SoftStart (25.18s)
TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)
TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-180719 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)
TestFunctional/serial/CacheCmd/cache/add_remote (4.8s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-180719 cache add k8s.gcr.io/pause:3.1: (1.743244473s)
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-180719 cache add k8s.gcr.io/pause:3.3: (1.763483861s)
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 cache add k8s.gcr.io/pause:latest
E1031 18:08:55.455179  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-180719 cache add k8s.gcr.io/pause:latest: (1.288405401s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.80s)
TestFunctional/serial/CacheCmd/cache/add_local (2.39s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-180719 /tmp/TestFunctionalserialCacheCmdcacheadd_local3185595201/001
functional_test.go:1082: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 cache add minikube-local-cache-test:functional-180719
functional_test.go:1082: (dbg) Done: out/minikube-linux-amd64 -p functional-180719 cache add minikube-local-cache-test:functional-180719: (2.150297297s)
functional_test.go:1087: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 cache delete minikube-local-cache-test:functional-180719
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-180719
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.39s)
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)
TestFunctional/serial/CacheCmd/cache/list (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)
TestFunctional/serial/CacheCmd/cache/cache_reload (2.16s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180719 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (235.285367ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 cache reload
functional_test.go:1151: (dbg) Done: out/minikube-linux-amd64 -p functional-180719 cache reload: (1.410116114s)
functional_test.go:1156: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.16s)

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 kubectl -- --context functional-180719 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-180719 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (31.46s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-linux-amd64 start -p functional-180719 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:750: (dbg) Done: out/minikube-linux-amd64 start -p functional-180719 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.455570246s)
functional_test.go:754: restart took 31.455696531s for "functional-180719" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (31.46s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-180719 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.36s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 logs
functional_test.go:1229: (dbg) Done: out/minikube-linux-amd64 -p functional-180719 logs: (1.363841478s)
--- PASS: TestFunctional/serial/LogsCmd (1.36s)

TestFunctional/serial/LogsFileCmd (1.33s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 logs --file /tmp/TestFunctionalserialLogsFileCmd1686992979/001/logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-linux-amd64 -p functional-180719 logs --file /tmp/TestFunctionalserialLogsFileCmd1686992979/001/logs.txt: (1.332718359s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.33s)

TestFunctional/parallel/DashboardCmd (14.33s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-180719 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:903: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-180719 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 491318: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.33s)

TestFunctional/parallel/DryRun (0.35s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-linux-amd64 start -p functional-180719 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-180719 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (178.654108ms)

-- stdout --
	* [functional-180719] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15242-478932/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-478932/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1031 18:10:00.332136  491096 out.go:296] Setting OutFile to fd 1 ...
	I1031 18:10:00.332250  491096 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:10:00.332261  491096 out.go:309] Setting ErrFile to fd 2...
	I1031 18:10:00.332265  491096 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:10:00.332384  491096 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15242-478932/.minikube/bin
	I1031 18:10:00.332935  491096 out.go:303] Setting JSON to false
	I1031 18:10:00.333821  491096 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6753,"bootTime":1667233047,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 18:10:00.333881  491096 start.go:126] virtualization: kvm guest
	I1031 18:10:00.336666  491096 out.go:177] * [functional-180719] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1031 18:10:00.338618  491096 out.go:177]   - MINIKUBE_LOCATION=15242
	I1031 18:10:00.338558  491096 notify.go:220] Checking for updates...
	I1031 18:10:00.340430  491096 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 18:10:00.342100  491096 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15242-478932/kubeconfig
	I1031 18:10:00.343647  491096 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-478932/.minikube
	I1031 18:10:00.345370  491096 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 18:10:00.347390  491096 config.go:180] Loaded profile config "functional-180719": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1031 18:10:00.347963  491096 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1031 18:10:00.348021  491096 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:10:00.364746  491096 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:46831
	I1031 18:10:00.365200  491096 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:10:00.365860  491096 main.go:134] libmachine: Using API Version  1
	I1031 18:10:00.365890  491096 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:10:00.366211  491096 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:10:00.366408  491096 main.go:134] libmachine: (functional-180719) Calling .DriverName
	I1031 18:10:00.366632  491096 driver.go:365] Setting default libvirt URI to qemu:///system
	I1031 18:10:00.366974  491096 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1031 18:10:00.367038  491096 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:10:00.383724  491096 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:33319
	I1031 18:10:00.384134  491096 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:10:00.384673  491096 main.go:134] libmachine: Using API Version  1
	I1031 18:10:00.384715  491096 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:10:00.385155  491096 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:10:00.385382  491096 main.go:134] libmachine: (functional-180719) Calling .DriverName
	I1031 18:10:00.420534  491096 out.go:177] * Using the kvm2 driver based on existing profile
	I1031 18:10:00.422164  491096 start.go:282] selected driver: kvm2
	I1031 18:10:00.422192  491096 start.go:808] validating driver "kvm2" against &{Name:functional-180719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15159/minikube-v1.27.0-1666206003-15159-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.25.3 ClusterName:functional-180719 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.141 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics
-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 18:10:00.422315  491096 start.go:819] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 18:10:00.424685  491096 out.go:177] 
	W1031 18:10:00.426151  491096 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1031 18:10:00.427690  491096 out.go:177] 

** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-linux-amd64 start -p functional-180719 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.35s)

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 start -p functional-180719 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-180719 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (164.730248ms)

-- stdout --
	* [functional-180719] minikube v1.27.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15242-478932/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-478932/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1031 18:10:00.676059  491197 out.go:296] Setting OutFile to fd 1 ...
	I1031 18:10:00.676263  491197 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:10:00.676274  491197 out.go:309] Setting ErrFile to fd 2...
	I1031 18:10:00.676283  491197 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:10:00.676457  491197 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15242-478932/.minikube/bin
	I1031 18:10:00.676990  491197 out.go:303] Setting JSON to false
	I1031 18:10:00.677959  491197 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6754,"bootTime":1667233047,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 18:10:00.678036  491197 start.go:126] virtualization: kvm guest
	I1031 18:10:00.680679  491197 out.go:177] * [functional-180719] minikube v1.27.1 sur Ubuntu 20.04 (kvm/amd64)
	I1031 18:10:00.682418  491197 out.go:177]   - MINIKUBE_LOCATION=15242
	I1031 18:10:00.682384  491197 notify.go:220] Checking for updates...
	I1031 18:10:00.684035  491197 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 18:10:00.685817  491197 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15242-478932/kubeconfig
	I1031 18:10:00.687293  491197 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-478932/.minikube
	I1031 18:10:00.688893  491197 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 18:10:00.690566  491197 config.go:180] Loaded profile config "functional-180719": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1031 18:10:00.690980  491197 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1031 18:10:00.691029  491197 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:10:00.706226  491197 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:45769
	I1031 18:10:00.706625  491197 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:10:00.707165  491197 main.go:134] libmachine: Using API Version  1
	I1031 18:10:00.707189  491197 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:10:00.707501  491197 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:10:00.707656  491197 main.go:134] libmachine: (functional-180719) Calling .DriverName
	I1031 18:10:00.707837  491197 driver.go:365] Setting default libvirt URI to qemu:///system
	I1031 18:10:00.708110  491197 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1031 18:10:00.708145  491197 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:10:00.722899  491197 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:33241
	I1031 18:10:00.723263  491197 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:10:00.723756  491197 main.go:134] libmachine: Using API Version  1
	I1031 18:10:00.723779  491197 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:10:00.724111  491197 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:10:00.724335  491197 main.go:134] libmachine: (functional-180719) Calling .DriverName
	I1031 18:10:00.756467  491197 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1031 18:10:00.757913  491197 start.go:282] selected driver: kvm2
	I1031 18:10:00.757939  491197 start.go:808] validating driver "kvm2" against &{Name:functional-180719 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15159/minikube-v1.27.0-1666206003-15159-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.25.3 ClusterName:functional-180719 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.141 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics
-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 18:10:00.758093  491197 start.go:819] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 18:10:00.760590  491197 out.go:177] 
	W1031 18:10:00.762040  491197 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1031 18:10:00.763499  491197 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (1.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 status
functional_test.go:853: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:865: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.25s)

TestFunctional/parallel/ServiceCmd (12.62s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-180719 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1439: (dbg) Run:  kubectl --context functional-180719 expose deployment hello-node --type=NodePort --port=8080

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-glhhh" [af6b1ecb-4375-4926-97e0-1d90896446b5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-glhhh" [af6b1ecb-4375-4926-97e0-1d90896446b5] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 11.019298055s
functional_test.go:1449: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 service list
functional_test.go:1463: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 service --namespace=default --https --url hello-node
functional_test.go:1476: found endpoint: https://192.168.39.141:31359
functional_test.go:1491: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 service hello-node --url --format={{.IP}}
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 service hello-node --url

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1511: found endpoint for hello-node: http://192.168.39.141:31359
--- PASS: TestFunctional/parallel/ServiceCmd (12.62s)

TestFunctional/parallel/ServiceCmdConnect (8.58s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-180719 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-180719 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-6458c8fb6f-brnlk" [e3b1c539-bae8-47f5-b25a-308578680aae] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:342: "hello-node-connect-6458c8fb6f-brnlk" [e3b1c539-bae8-47f5-b25a-308578680aae] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.010276889s
functional_test.go:1579: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 service hello-node-connect --url
functional_test.go:1585: found endpoint for hello-node-connect: http://192.168.39.141:31605
functional_test.go:1605: http://192.168.39.141:31605: success! body:

Hostname: hello-node-connect-6458c8fb6f-brnlk

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.141:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.141:31605
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.58s)

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (46.06s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [ba3824f1-2251-45c8-8c17-be9dc5a5d662] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.035036201s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-180719 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-180719 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-180719 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-180719 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [2a059f19-5aa0-4e27-ae15-3d3845e9811e] Pending
helpers_test.go:342: "sp-pod" [2a059f19-5aa0-4e27-ae15-3d3845e9811e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [2a059f19-5aa0-4e27-ae15-3d3845e9811e] Running
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.012175556s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-180719 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-180719 delete -f testdata/storage-provisioner/pod.yaml
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-180719 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [615521cd-9918-4d6e-8f44-d3a9ab8fa166] Pending
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [615521cd-9918-4d6e-8f44-d3a9ab8fa166] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [615521cd-9918-4d6e-8f44-d3a9ab8fa166] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.016632491s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-180719 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.06s)

TestFunctional/parallel/SSHCmd (0.57s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh "echo hello"
functional_test.go:1672: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.57s)

TestFunctional/parallel/CpCmd (1.11s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 cp testdata/cp-test.txt /home/docker/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh -n functional-180719 "sudo cat /home/docker/cp-test.txt"
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 cp functional-180719:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1082962020/001/cp-test.txt
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh -n functional-180719 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.11s)

TestFunctional/parallel/MySQL (30.33s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-180719 replace --force -f testdata/mysql.yaml
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-w8nr6" [68de0aaa-dbd2-46e7-9b9b-77c48cb81e15] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-w8nr6" [68de0aaa-dbd2-46e7-9b9b-77c48cb81e15] Running
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.014710008s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-180719 exec mysql-596b7fcdbf-w8nr6 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-180719 exec mysql-596b7fcdbf-w8nr6 -- mysql -ppassword -e "show databases;": exit status 1 (237.902501ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-180719 exec mysql-596b7fcdbf-w8nr6 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-180719 exec mysql-596b7fcdbf-w8nr6 -- mysql -ppassword -e "show databases;": exit status 1 (193.017776ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-180719 exec mysql-596b7fcdbf-w8nr6 -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-180719 exec mysql-596b7fcdbf-w8nr6 -- mysql -ppassword -e "show databases;": exit status 1 (195.735819ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-180719 exec mysql-596b7fcdbf-w8nr6 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (30.33s)

TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/486314/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh "sudo cat /etc/test/nested/copy/486314/hosts"
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.72s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/486314.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh "sudo cat /etc/ssl/certs/486314.pem"
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/486314.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh "sudo cat /usr/share/ca-certificates/486314.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/4863142.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh "sudo cat /etc/ssl/certs/4863142.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/4863142.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh "sudo cat /usr/share/ca-certificates/4863142.pem"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.72s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-180719 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.6s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh "sudo systemctl is-active docker"
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180719 ssh "sudo systemctl is-active docker": exit status 1 (322.198696ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:1954: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh "sudo systemctl is-active crio"
E1031 18:09:36.416145  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180719 ssh "sudo systemctl is-active crio": exit status 1 (277.121151ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.60s)

TestFunctional/parallel/License (0.19s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.19s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.71s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.71s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 image ls --format short
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-180719 image ls --format short:
registry.k8s.io/pause:3.8
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-180719
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-180719
docker.io/kindest/kindnetd:v20221004-44d545d1
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-180719 image ls --format table:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd                  | v20221004-44d545d1 | sha256:d6e3e2 | 25.8MB |
| docker.io/library/mysql                     | 5.7                | sha256:149052 | 144MB  |
| k8s.gcr.io/pause                            | 3.1                | sha256:da86e6 | 315kB  |
| k8s.gcr.io/pause                            | latest             | sha256:350b16 | 72.3kB |
| registry.k8s.io/kube-apiserver              | v1.25.3            | sha256:0346db | 34.2MB |
| registry.k8s.io/kube-controller-manager     | v1.25.3            | sha256:603999 | 31.3MB |
| registry.k8s.io/kube-proxy                  | v1.25.3            | sha256:beaaf0 | 20.3MB |
| docker.io/library/minikube-local-cache-test | functional-180719  | sha256:e4a053 | 1.74kB |
| docker.io/library/nginx                     | latest             | sha256:76c69f | 56.8MB |
| gcr.io/google-containers/addon-resizer      | functional-180719  | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| k8s.gcr.io/pause                            | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/kube-scheduler              | v1.25.3            | sha256:6d23ec | 15.8MB |
| registry.k8s.io/etcd                        | 3.5.4-0            | sha256:a8a176 | 102MB  |
| registry.k8s.io/pause                       | 3.8                | sha256:487387 | 311kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| k8s.gcr.io/echoserver                       | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/coredns/coredns             | v1.9.3             | sha256:5185b9 | 14.8MB |
|---------------------------------------------|--------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-180719 image ls --format json:
[{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"72306"},{"id":"sha256:a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66","repoDigests":["registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1"],"repoTags":["registry.k8s.io/etcd:3.5.4-0"],"size":"102157811"},{"id":"sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f","repoDigests":["docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe"],"repoTags":["docker.io/kindest/kindnetd:v20221004-44d545d1"],"size":"25830582"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0","repoDigests":["registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3"],"repoTags":["registry.k8s.io/kube-apiserver:v1.25.3"],"size":"34238163"},{"id":"sha256:6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912","repoDigests":["registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.25.3"],"size":"15798744"},{"id":"sha256:e4a053dc630d51f5208bf4acf99f3104f2bd0ebe91a99bff70833673115194f7","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-180719"],"size":"1740"},{"id":"sha256:14905234a4ed471d6da5b7e09d9e9f62f4d350713e2b0e8c86652ebcbf710238","repoDigests":["docker.io/library/mysql@sha256:f5e2d4d7dccdc3f2a1d592bd3f0eb472b2f72f9fb942a84ff5b5cc049fe63a04"],"repoTags":["docker.io/library/mysql:5.7"],"size":"144343859"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-180719"],"size":"10823156"},{"id":"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":["registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a"],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"14837849"},{"id":"sha256:60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.25.3"],"size":"31261869"},{"id":"sha256:beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041","repoDigests":["registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f"],"repoTags":["registry.k8s.io/kube-proxy:v1.25.3"],"size":"20265805"},{"id":"sha256:76c69feac34e85768b284f84416c3546b240e8cb4f68acbbe5ad261a8b36f39f","repoDigests":["docker.io/library/nginx@sha256:943c25b4b66b332184d5ba6bb18234273551593016c0e0ae906bab111548239f"],"repoTags":["docker.io/library/nginx:latest"],"size":"56841090"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"315399"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"297686"},{"id":"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517","repoDigests":["registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d"],"repoTags":["registry.k8s.io/pause:3.8"],"size":"311286"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 image ls --format yaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-180719 image ls --format yaml:
- id: sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f
repoDigests:
- docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe
repoTags:
- docker.io/kindest/kindnetd:v20221004-44d545d1
size: "25830582"
- id: sha256:e4a053dc630d51f5208bf4acf99f3104f2bd0ebe91a99bff70833673115194f7
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-180719
size: "1740"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "72306"
- id: sha256:60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:d3a06262256f3e7578d5f77df137a8cdf58f9f498f35b5b56d116e8a7e31dc91
repoTags:
- registry.k8s.io/kube-controller-manager:v1.25.3
size: "31261869"
- id: sha256:a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66
repoDigests:
- registry.k8s.io/etcd@sha256:6f72b851544986cb0921b53ea655ec04c36131248f16d4ad110cb3ca0c369dc1
repoTags:
- registry.k8s.io/etcd:3.5.4-0
size: "102157811"
- id: sha256:0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:4188262a351f156e8027ff81693d771c35b34b668cbd61e59c4a4490dd5c08f3
repoTags:
- registry.k8s.io/kube-apiserver:v1.25.3
size: "34238163"
- id: sha256:6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:f478aa916568b00269068ff1e9ff742ecc16192eb6e371e30f69f75df904162e
repoTags:
- registry.k8s.io/kube-scheduler:v1.25.3
size: "15798744"
- id: sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
repoDigests:
- registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d
repoTags:
- registry.k8s.io/pause:3.8
size: "311286"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-180719
size: "10823156"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "297686"
- id: sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "14837849"
- id: sha256:beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041
repoDigests:
- registry.k8s.io/kube-proxy@sha256:6bf25f038543e1f433cb7f2bdda445ed348c7b9279935ebc2ae4f432308ed82f
repoTags:
- registry.k8s.io/kube-proxy:v1.25.3
size: "20265805"
- id: sha256:14905234a4ed471d6da5b7e09d9e9f62f4d350713e2b0e8c86652ebcbf710238
repoDigests:
- docker.io/library/mysql@sha256:f5e2d4d7dccdc3f2a1d592bd3f0eb472b2f72f9fb942a84ff5b5cc049fe63a04
repoTags:
- docker.io/library/mysql:5.7
size: "144343859"
- id: sha256:76c69feac34e85768b284f84416c3546b240e8cb4f68acbbe5ad261a8b36f39f
repoDigests:
- docker.io/library/nginx@sha256:943c25b4b66b332184d5ba6bb18234273551593016c0e0ae906bab111548239f
repoTags:
- docker.io/library/nginx:latest
size: "56841090"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "46237695"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "315399"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)
TestFunctional/parallel/ImageCommands/ImageBuild (4.68s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh pgrep buildkitd
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180719 ssh pgrep buildkitd: exit status 1 (273.494318ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 image build -t localhost/my-image:functional-180719 testdata/build
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p functional-180719 image build -t localhost/my-image:functional-180719 testdata/build: (4.177144882s)
functional_test.go:319: (dbg) Stderr: out/minikube-linux-amd64 -p functional-180719 image build -t localhost/my-image:functional-180719 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s
#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.1s
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.3s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s
#6 [2/3] RUN true
#6 DONE 1.3s
#7 [3/3] ADD content.txt /
#7 DONE 0.2s
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:7a073469b74b85fc29b1e699a643173ba88d4d6c9bf7f4106fbea8642587863f 0.0s done
#8 exporting config sha256:1ad01b0e65dbe039931185a6ee17076a402f3f8cd1e3d4088f3c360f1ca0fe1c 0.0s done
#8 naming to localhost/my-image:functional-180719
#8 naming to localhost/my-image:functional-180719 done
#8 DONE 0.3s
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 image ls
2022/10/31 18:10:14 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.68s)
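For reference, the three build steps logged above ([1/3] FROM, [2/3] RUN, [3/3] ADD) imply that the testdata/build Dockerfile looks roughly like the sketch below. This is reconstructed from the buildkit trace, not shown verbatim in the log, so treat it as an approximation:

```dockerfile
# Approximate reconstruction of testdata/build/Dockerfile from the
# buildkit steps logged above (97B dockerfile, 3 stages).
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
```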
TestFunctional/parallel/ImageCommands/Setup (1.42s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.39431375s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-180719
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.42s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.11s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 image load --daemon gcr.io/google-containers/addon-resizer:functional-180719
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p functional-180719 image load --daemon gcr.io/google-containers/addon-resizer:functional-180719: (3.887602827s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.11s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.94s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 image load --daemon gcr.io/google-containers/addon-resizer:functional-180719
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-linux-amd64 -p functional-180719 image load --daemon gcr.io/google-containers/addon-resizer:functional-180719: (4.704199505s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.94s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.353591066s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-180719
functional_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 image load --daemon gcr.io/google-containers/addon-resizer:functional-180719
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p functional-180719 image load --daemon gcr.io/google-containers/addon-resizer:functional-180719: (4.623999041s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.27s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.05s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 image save gcr.io/google-containers/addon-resizer:functional-180719 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar
functional_test.go:376: (dbg) Done: out/minikube-linux-amd64 -p functional-180719 image save gcr.io/google-containers/addon-resizer:functional-180719 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar: (1.049465447s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.05s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.85s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 image rm gcr.io/google-containers/addon-resizer:functional-180719
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.85s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.97s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Done: out/minikube-linux-amd64 -p functional-180719 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar: (1.707513717s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.97s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.06s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-180719
functional_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 image save --daemon gcr.io/google-containers/addon-resizer:functional-180719
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p functional-180719 image save --daemon gcr.io/google-containers/addon-resizer:functional-180719: (2.005143086s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-180719
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.06s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)
TestFunctional/parallel/MountCmd/any-port (10.99s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-180719 /tmp/TestFunctionalparallelMountCmdany-port3978233940/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1667239799178814074" to /tmp/TestFunctionalparallelMountCmdany-port3978233940/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1667239799178814074" to /tmp/TestFunctionalparallelMountCmdany-port3978233940/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1667239799178814074" to /tmp/TestFunctionalparallelMountCmdany-port3978233940/001/test-1667239799178814074
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180719 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (321.496353ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 31 18:09 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 31 18:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 31 18:09 test-1667239799178814074
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh cat /mount-9p/test-1667239799178814074
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-180719 replace --force -f testdata/busybox-mount-test.yaml
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [d81848b2-d3aa-407b-b026-de500c708a5a] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [d81848b2-d3aa-407b-b026-de500c708a5a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [d81848b2-d3aa-407b-b026-de500c708a5a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [d81848b2-d3aa-407b-b026-de500c708a5a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.013615592s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-180719 logs busybox-mount
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh stat /mount-9p/created-by-test
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh stat /mount-9p/created-by-pod
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-180719 /tmp/TestFunctionalparallelMountCmdany-port3978233940/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.99s)
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "287.82947ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "78.584935ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)
TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1362: Took "279.199091ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "77.369119ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)
TestFunctional/parallel/MountCmd/specific-port (1.8s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-180719 /tmp/TestFunctionalparallelMountCmdspecific-port2856759588/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180719 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (293.28091ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-180719 /tmp/TestFunctionalparallelMountCmdspecific-port2856759588/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p functional-180719 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-180719 ssh "sudo umount -f /mount-9p": exit status 1 (251.825389ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:225: "out/minikube-linux-amd64 -p functional-180719 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-180719 /tmp/TestFunctionalparallelMountCmdspecific-port2856759588/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.80s)
TestFunctional/delete_addon-resizer_images (0.09s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-180719
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-180719
--- PASS: TestFunctional/delete_my-image_image (0.02s)
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-180719
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
TestIngressAddonLegacy/StartLegacyK8sCluster (75.96s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-181025 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E1031 18:10:58.337285  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-181025 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m15.962511121s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (75.96s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (15.4s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-181025 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-181025 addons enable ingress --alsologtostderr -v=5: (15.402033729s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (15.40s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.41s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-181025 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.41s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (41.42s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:165: (dbg) Run:  kubectl --context ingress-addon-legacy-181025 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:165: (dbg) Done: kubectl --context ingress-addon-legacy-181025 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.348357539s)
addons_test.go:185: (dbg) Run:  kubectl --context ingress-addon-legacy-181025 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:198: (dbg) Run:  kubectl --context ingress-addon-legacy-181025 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:203: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [f35d2395-d58e-4390-b5c9-9879f8cad436] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [f35d2395-d58e-4390-b5c9-9879f8cad436] Running
addons_test.go:203: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.007741932s
addons_test.go:215: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-181025 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:239: (dbg) Run:  kubectl --context ingress-addon-legacy-181025 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-181025 ip
addons_test.go:250: (dbg) Run:  nslookup hello-john.test 192.168.39.225
addons_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-181025 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-181025 addons disable ingress-dns --alsologtostderr -v=1: (10.557479959s)
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-181025 addons disable ingress --alsologtostderr -v=1
addons_test.go:264: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-181025 addons disable ingress --alsologtostderr -v=1: (7.382864512s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (41.42s)

TestJSONOutput/start/Command (109.57s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-181240 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E1031 18:13:14.492917  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
E1031 18:13:42.178658  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-181240 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m49.567234175s)
--- PASS: TestJSONOutput/start/Command (109.57s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.64s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-181240 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-181240 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (2.12s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-181240 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-181240 --output=json --user=testUser: (2.117972408s)
--- PASS: TestJSONOutput/stop/Command (2.12s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.27s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-181433 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-181433 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (90.447664ms)
-- stdout --
	{"specversion":"1.0","id":"bac77290-0295-4f19-8468-e16d2580afbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-181433] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"99f439fe-815c-42c4-97e0-8f32b41fd130","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15242"}}
	{"specversion":"1.0","id":"beddd3f4-c67a-4973-b77c-51c25673f30c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0eda10a8-f373-4037-b481-de588508d37d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15242-478932/kubeconfig"}}
	{"specversion":"1.0","id":"965fc6d4-2fc5-4fea-be45-fc991e559d11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-478932/.minikube"}}
	{"specversion":"1.0","id":"97756ae7-3b24-4756-8677-be56818027a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"51240282-832e-4e38-be3a-87269ae44163","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-181433" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-181433
--- PASS: TestErrorJSONOutput (0.27s)
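For reference, each line minikube prints with `--output=json` is a CloudEvents envelope like the ones in the stdout above, with the payload under `data` and the event kind encoded in the `type` suffix (`step`, `info`, `error`). A minimal Python sketch of how such a line could be consumed; the `classify` helper is a hypothetical illustration (not part of the test suite), and the sample line is copied verbatim from the error event above:

```python
import json

# Error event copied verbatim from the TestErrorJSONOutput stdout above.
line = ('{"specversion":"1.0","id":"51240282-832e-4e38-be3a-87269ae44163",'
        '"source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error",'
        '"datacontenttype":"application/json","data":{"advice":"","exitcode":"56",'
        '"issues":"","message":"The driver \'fail\' is not supported on linux/amd64",'
        '"name":"DRV_UNSUPPORTED_OS","url":""}}')

def classify(raw: str):
    """Return the short event kind and the data payload of one JSON output line."""
    event = json.loads(raw)
    # "io.k8s.sigs.minikube.error" -> "error"
    kind = event["type"].rsplit(".", 1)[-1]
    return kind, event["data"]

kind, data = classify(line)
print(kind, data["exitcode"], data["name"])  # error 56 DRV_UNSUPPORTED_OS
```

A consumer driving minikube from another process would apply the same split per line of stdout, branching on the kind to surface progress steps versus terminal errors.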

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (111.71s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-181433 --driver=kvm2  --container-runtime=containerd
E1031 18:14:36.087589  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
E1031 18:14:36.092899  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
E1031 18:14:36.103165  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
E1031 18:14:36.123457  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
E1031 18:14:36.163757  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
E1031 18:14:36.244085  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
E1031 18:14:36.404515  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
E1031 18:14:36.725099  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
E1031 18:14:37.366019  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
E1031 18:14:38.646801  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
E1031 18:14:41.208617  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
E1031 18:14:46.329178  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
E1031 18:14:56.570197  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
E1031 18:15:17.051248  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-181433 --driver=kvm2  --container-runtime=containerd: (54.684586339s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-181433 --driver=kvm2  --container-runtime=containerd
E1031 18:15:58.011733  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-181433 --driver=kvm2  --container-runtime=containerd: (54.000747837s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-181433
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-181433
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-181433" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-181433
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-181433: (1.052041277s)
helpers_test.go:175: Cleaning up "first-181433" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-181433
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-181433: (1.020526256s)
--- PASS: TestMinikubeProfile (111.71s)

TestMountStart/serial/StartWithMountFirst (26.9s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-181625 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-181625 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (25.900327634s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.90s)

TestMountStart/serial/VerifyMountFirst (0.42s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-181625 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-181625 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.42s)

TestMountStart/serial/StartWithMountSecond (26.89s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-181625 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E1031 18:16:57.677109  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
E1031 18:16:57.682405  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
E1031 18:16:57.692642  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
E1031 18:16:57.712932  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
E1031 18:16:57.753189  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
E1031 18:16:57.833523  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
E1031 18:16:57.993993  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
E1031 18:16:58.314609  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
E1031 18:16:58.955599  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
E1031 18:17:00.236083  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
E1031 18:17:02.797880  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
E1031 18:17:07.918787  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
E1031 18:17:18.159277  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-181625 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (25.894334188s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.89s)

TestMountStart/serial/VerifyMountSecond (0.43s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-181625 ssh -- ls /minikube-host
E1031 18:17:19.932952  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-181625 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.43s)

TestMountStart/serial/DeleteFirst (0.9s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-181625 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.90s)

TestMountStart/serial/VerifyMountPostDelete (0.44s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-181625 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-181625 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.44s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-181625
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-181625: (1.203168407s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (21.88s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-181625
E1031 18:17:38.639527  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-181625: (20.88261362s)
--- PASS: TestMountStart/serial/RestartStopped (21.88s)

TestMountStart/serial/VerifyMountPostStop (0.44s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-181625 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-181625 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.44s)

TestMultiNode/serial/FreshStart2Nodes (151.26s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-181746 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E1031 18:18:14.492913  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
E1031 18:18:19.600163  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
E1031 18:19:36.087859  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
E1031 18:19:41.521373  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
E1031 18:20:03.773788  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-181746 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m30.827309065s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (151.26s)

TestMultiNode/serial/DeployApp2Nodes (5.95s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181746 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181746 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-181746 -- rollout status deployment/busybox: (4.113007779s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181746 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181746 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181746 -- exec busybox-65db55d5d6-7gvb5 -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181746 -- exec busybox-65db55d5d6-rblzj -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181746 -- exec busybox-65db55d5d6-7gvb5 -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181746 -- exec busybox-65db55d5d6-rblzj -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181746 -- exec busybox-65db55d5d6-7gvb5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181746 -- exec busybox-65db55d5d6-rblzj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.95s)

TestMultiNode/serial/PingHostFrom2Pods (0.95s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181746 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181746 -- exec busybox-65db55d5d6-7gvb5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181746 -- exec busybox-65db55d5d6-7gvb5 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181746 -- exec busybox-65db55d5d6-rblzj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-181746 -- exec busybox-65db55d5d6-rblzj -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.95s)

TestMultiNode/serial/AddNode (62.48s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-181746 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-181746 -v 3 --alsologtostderr: (1m1.869422984s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (62.48s)

TestMultiNode/serial/ProfileList (0.25s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.25s)

TestMultiNode/serial/CopyFile (8.24s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 cp testdata/cp-test.txt multinode-181746:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 ssh -n multinode-181746 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 cp multinode-181746:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2653831304/001/cp-test_multinode-181746.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 ssh -n multinode-181746 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 cp multinode-181746:/home/docker/cp-test.txt multinode-181746-m02:/home/docker/cp-test_multinode-181746_multinode-181746-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 ssh -n multinode-181746 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 ssh -n multinode-181746-m02 "sudo cat /home/docker/cp-test_multinode-181746_multinode-181746-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 cp multinode-181746:/home/docker/cp-test.txt multinode-181746-m03:/home/docker/cp-test_multinode-181746_multinode-181746-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 ssh -n multinode-181746 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 ssh -n multinode-181746-m03 "sudo cat /home/docker/cp-test_multinode-181746_multinode-181746-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 cp testdata/cp-test.txt multinode-181746-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 ssh -n multinode-181746-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 cp multinode-181746-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2653831304/001/cp-test_multinode-181746-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 ssh -n multinode-181746-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 cp multinode-181746-m02:/home/docker/cp-test.txt multinode-181746:/home/docker/cp-test_multinode-181746-m02_multinode-181746.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 ssh -n multinode-181746-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 ssh -n multinode-181746 "sudo cat /home/docker/cp-test_multinode-181746-m02_multinode-181746.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 cp multinode-181746-m02:/home/docker/cp-test.txt multinode-181746-m03:/home/docker/cp-test_multinode-181746-m02_multinode-181746-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 ssh -n multinode-181746-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 ssh -n multinode-181746-m03 "sudo cat /home/docker/cp-test_multinode-181746-m02_multinode-181746-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 cp testdata/cp-test.txt multinode-181746-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 ssh -n multinode-181746-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 cp multinode-181746-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2653831304/001/cp-test_multinode-181746-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 ssh -n multinode-181746-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 cp multinode-181746-m03:/home/docker/cp-test.txt multinode-181746:/home/docker/cp-test_multinode-181746-m03_multinode-181746.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 ssh -n multinode-181746-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 ssh -n multinode-181746 "sudo cat /home/docker/cp-test_multinode-181746-m03_multinode-181746.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 cp multinode-181746-m03:/home/docker/cp-test.txt multinode-181746-m02:/home/docker/cp-test_multinode-181746-m03_multinode-181746-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 ssh -n multinode-181746-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 ssh -n multinode-181746-m02 "sudo cat /home/docker/cp-test_multinode-181746-m03_multinode-181746-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.24s)

TestMultiNode/serial/StopNode (2.17s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-181746 node stop m03: (1.262570853s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-181746 status: exit status 7 (448.015556ms)

-- stdout --
	multinode-181746
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-181746-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-181746-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-181746 status --alsologtostderr: exit status 7 (456.425975ms)

-- stdout --
	multinode-181746
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-181746-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-181746-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1031 18:21:37.157009  497310 out.go:296] Setting OutFile to fd 1 ...
	I1031 18:21:37.157130  497310 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:21:37.157136  497310 out.go:309] Setting ErrFile to fd 2...
	I1031 18:21:37.157142  497310 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:21:37.157446  497310 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15242-478932/.minikube/bin
	I1031 18:21:37.157726  497310 out.go:303] Setting JSON to false
	I1031 18:21:37.157754  497310 mustload.go:65] Loading cluster: multinode-181746
	I1031 18:21:37.157942  497310 notify.go:220] Checking for updates...
	I1031 18:21:37.158854  497310 config.go:180] Loaded profile config "multinode-181746": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1031 18:21:37.158876  497310 status.go:255] checking status of multinode-181746 ...
	I1031 18:21:37.159308  497310 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1031 18:21:37.159350  497310 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:21:37.176240  497310 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:35997
	I1031 18:21:37.176616  497310 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:21:37.177306  497310 main.go:134] libmachine: Using API Version  1
	I1031 18:21:37.177333  497310 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:21:37.177790  497310 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:21:37.177996  497310 main.go:134] libmachine: (multinode-181746) Calling .GetState
	I1031 18:21:37.179694  497310 status.go:330] multinode-181746 host status = "Running" (err=<nil>)
	I1031 18:21:37.179713  497310 host.go:66] Checking if "multinode-181746" exists ...
	I1031 18:21:37.179986  497310 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1031 18:21:37.180029  497310 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:21:37.196682  497310 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:43001
	I1031 18:21:37.197151  497310 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:21:37.197612  497310 main.go:134] libmachine: Using API Version  1
	I1031 18:21:37.197637  497310 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:21:37.197929  497310 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:21:37.198120  497310 main.go:134] libmachine: (multinode-181746) Calling .GetIP
	I1031 18:21:37.200664  497310 main.go:134] libmachine: (multinode-181746) DBG | domain multinode-181746 has defined MAC address 52:54:00:54:fd:de in network mk-multinode-181746
	I1031 18:21:37.201072  497310 main.go:134] libmachine: (multinode-181746) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:fd:de", ip: ""} in network mk-multinode-181746: {Iface:virbr1 ExpiryTime:2022-10-31 19:18:01 +0000 UTC Type:0 Mac:52:54:00:54:fd:de Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-181746 Clientid:01:52:54:00:54:fd:de}
	I1031 18:21:37.201112  497310 main.go:134] libmachine: (multinode-181746) DBG | domain multinode-181746 has defined IP address 192.168.39.65 and MAC address 52:54:00:54:fd:de in network mk-multinode-181746
	I1031 18:21:37.201233  497310 host.go:66] Checking if "multinode-181746" exists ...
	I1031 18:21:37.201601  497310 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1031 18:21:37.201641  497310 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:21:37.216812  497310 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:40053
	I1031 18:21:37.217210  497310 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:21:37.217625  497310 main.go:134] libmachine: Using API Version  1
	I1031 18:21:37.217645  497310 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:21:37.217945  497310 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:21:37.218088  497310 main.go:134] libmachine: (multinode-181746) Calling .DriverName
	I1031 18:21:37.218263  497310 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1031 18:21:37.218300  497310 main.go:134] libmachine: (multinode-181746) Calling .GetSSHHostname
	I1031 18:21:37.220829  497310 main.go:134] libmachine: (multinode-181746) DBG | domain multinode-181746 has defined MAC address 52:54:00:54:fd:de in network mk-multinode-181746
	I1031 18:21:37.221241  497310 main.go:134] libmachine: (multinode-181746) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:fd:de", ip: ""} in network mk-multinode-181746: {Iface:virbr1 ExpiryTime:2022-10-31 19:18:01 +0000 UTC Type:0 Mac:52:54:00:54:fd:de Iaid: IPaddr:192.168.39.65 Prefix:24 Hostname:multinode-181746 Clientid:01:52:54:00:54:fd:de}
	I1031 18:21:37.221274  497310 main.go:134] libmachine: (multinode-181746) DBG | domain multinode-181746 has defined IP address 192.168.39.65 and MAC address 52:54:00:54:fd:de in network mk-multinode-181746
	I1031 18:21:37.221391  497310 main.go:134] libmachine: (multinode-181746) Calling .GetSSHPort
	I1031 18:21:37.221581  497310 main.go:134] libmachine: (multinode-181746) Calling .GetSSHKeyPath
	I1031 18:21:37.221745  497310 main.go:134] libmachine: (multinode-181746) Calling .GetSSHUsername
	I1031 18:21:37.221872  497310 sshutil.go:53] new ssh client: &{IP:192.168.39.65 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15242-478932/.minikube/machines/multinode-181746/id_rsa Username:docker}
	I1031 18:21:37.309579  497310 ssh_runner.go:195] Run: systemctl --version
	I1031 18:21:37.315226  497310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 18:21:37.327856  497310 kubeconfig.go:92] found "multinode-181746" server: "https://192.168.39.65:8443"
	I1031 18:21:37.327892  497310 api_server.go:165] Checking apiserver status ...
	I1031 18:21:37.327921  497310 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 18:21:37.341699  497310 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1037/cgroup
	I1031 18:21:37.350298  497310 api_server.go:181] apiserver freezer: "9:freezer:/kubepods/burstable/podaba0378686f2659d279ccfb2bd43b53f/5a83ad9303cdcdef03d5ffc0935d406df1e4265f3a098ccbc3e7b651146f623f"
	I1031 18:21:37.350350  497310 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podaba0378686f2659d279ccfb2bd43b53f/5a83ad9303cdcdef03d5ffc0935d406df1e4265f3a098ccbc3e7b651146f623f/freezer.state
	I1031 18:21:37.358268  497310 api_server.go:203] freezer state: "THAWED"
	I1031 18:21:37.358293  497310 api_server.go:252] Checking apiserver healthz at https://192.168.39.65:8443/healthz ...
	I1031 18:21:37.363661  497310 api_server.go:278] https://192.168.39.65:8443/healthz returned 200:
	ok
	I1031 18:21:37.363684  497310 status.go:421] multinode-181746 apiserver status = Running (err=<nil>)
	I1031 18:21:37.363694  497310 status.go:257] multinode-181746 status: &{Name:multinode-181746 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1031 18:21:37.363715  497310 status.go:255] checking status of multinode-181746-m02 ...
	I1031 18:21:37.364001  497310 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1031 18:21:37.364051  497310 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:21:37.379766  497310 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:40987
	I1031 18:21:37.380191  497310 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:21:37.380730  497310 main.go:134] libmachine: Using API Version  1
	I1031 18:21:37.380760  497310 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:21:37.381124  497310 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:21:37.381318  497310 main.go:134] libmachine: (multinode-181746-m02) Calling .GetState
	I1031 18:21:37.382938  497310 status.go:330] multinode-181746-m02 host status = "Running" (err=<nil>)
	I1031 18:21:37.382963  497310 host.go:66] Checking if "multinode-181746-m02" exists ...
	I1031 18:21:37.383217  497310 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1031 18:21:37.383249  497310 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:21:37.397879  497310 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:45419
	I1031 18:21:37.398245  497310 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:21:37.398706  497310 main.go:134] libmachine: Using API Version  1
	I1031 18:21:37.398732  497310 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:21:37.399036  497310 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:21:37.399211  497310 main.go:134] libmachine: (multinode-181746-m02) Calling .GetIP
	I1031 18:21:37.402092  497310 main.go:134] libmachine: (multinode-181746-m02) DBG | domain multinode-181746-m02 has defined MAC address 52:54:00:2d:89:c8 in network mk-multinode-181746
	I1031 18:21:37.402569  497310 main.go:134] libmachine: (multinode-181746-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:c8", ip: ""} in network mk-multinode-181746: {Iface:virbr1 ExpiryTime:2022-10-31 19:19:16 +0000 UTC Type:0 Mac:52:54:00:2d:89:c8 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-181746-m02 Clientid:01:52:54:00:2d:89:c8}
	I1031 18:21:37.402603  497310 main.go:134] libmachine: (multinode-181746-m02) DBG | domain multinode-181746-m02 has defined IP address 192.168.39.134 and MAC address 52:54:00:2d:89:c8 in network mk-multinode-181746
	I1031 18:21:37.402761  497310 host.go:66] Checking if "multinode-181746-m02" exists ...
	I1031 18:21:37.403045  497310 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1031 18:21:37.403085  497310 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:21:37.418753  497310 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:36835
	I1031 18:21:37.419109  497310 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:21:37.419523  497310 main.go:134] libmachine: Using API Version  1
	I1031 18:21:37.419544  497310 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:21:37.419834  497310 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:21:37.420022  497310 main.go:134] libmachine: (multinode-181746-m02) Calling .DriverName
	I1031 18:21:37.420191  497310 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1031 18:21:37.420213  497310 main.go:134] libmachine: (multinode-181746-m02) Calling .GetSSHHostname
	I1031 18:21:37.422749  497310 main.go:134] libmachine: (multinode-181746-m02) DBG | domain multinode-181746-m02 has defined MAC address 52:54:00:2d:89:c8 in network mk-multinode-181746
	I1031 18:21:37.423222  497310 main.go:134] libmachine: (multinode-181746-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:89:c8", ip: ""} in network mk-multinode-181746: {Iface:virbr1 ExpiryTime:2022-10-31 19:19:16 +0000 UTC Type:0 Mac:52:54:00:2d:89:c8 Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-181746-m02 Clientid:01:52:54:00:2d:89:c8}
	I1031 18:21:37.423256  497310 main.go:134] libmachine: (multinode-181746-m02) DBG | domain multinode-181746-m02 has defined IP address 192.168.39.134 and MAC address 52:54:00:2d:89:c8 in network mk-multinode-181746
	I1031 18:21:37.423369  497310 main.go:134] libmachine: (multinode-181746-m02) Calling .GetSSHPort
	I1031 18:21:37.423521  497310 main.go:134] libmachine: (multinode-181746-m02) Calling .GetSSHKeyPath
	I1031 18:21:37.423665  497310 main.go:134] libmachine: (multinode-181746-m02) Calling .GetSSHUsername
	I1031 18:21:37.423798  497310 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15242-478932/.minikube/machines/multinode-181746-m02/id_rsa Username:docker}
	I1031 18:21:37.513765  497310 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 18:21:37.526909  497310 status.go:257] multinode-181746-m02 status: &{Name:multinode-181746-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1031 18:21:37.526946  497310 status.go:255] checking status of multinode-181746-m03 ...
	I1031 18:21:37.527287  497310 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1031 18:21:37.527329  497310 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:21:37.542895  497310 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:33919
	I1031 18:21:37.543335  497310 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:21:37.543771  497310 main.go:134] libmachine: Using API Version  1
	I1031 18:21:37.543797  497310 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:21:37.544117  497310 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:21:37.544301  497310 main.go:134] libmachine: (multinode-181746-m03) Calling .GetState
	I1031 18:21:37.545798  497310 status.go:330] multinode-181746-m03 host status = "Stopped" (err=<nil>)
	I1031 18:21:37.545811  497310 status.go:343] host is not running, skipping remaining checks
	I1031 18:21:37.545826  497310 status.go:257] multinode-181746-m03 status: &{Name:multinode-181746-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.17s)

TestMultiNode/serial/StartAfterStop (62.58s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 node start m03 --alsologtostderr
E1031 18:21:57.677181  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
E1031 18:22:25.362121  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-181746 node start m03 --alsologtostderr: (1m1.917510839s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (62.58s)

TestMultiNode/serial/RestartKeepsNodes (522.07s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-181746
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-181746
E1031 18:23:14.493157  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
E1031 18:24:36.088034  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
E1031 18:24:37.539583  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-181746: (3m4.458824828s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-181746 --wait=true -v=8 --alsologtostderr
E1031 18:26:57.676054  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
E1031 18:28:14.492527  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
E1031 18:29:36.087489  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
E1031 18:30:59.134156  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-181746 --wait=true -v=8 --alsologtostderr: (5m37.475919773s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-181746
--- PASS: TestMultiNode/serial/RestartKeepsNodes (522.07s)

TestMultiNode/serial/DeleteNode (2.12s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-181746 node delete m03: (1.546067511s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 status --alsologtostderr
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.12s)

TestMultiNode/serial/StopMultiNode (183.51s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 stop
E1031 18:31:57.676470  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
E1031 18:33:14.493115  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
E1031 18:33:20.724005  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-181746 stop: (3m3.297552598s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-181746 status: exit status 7 (108.447204ms)

-- stdout --
	multinode-181746
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-181746-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-181746 status --alsologtostderr: exit status 7 (102.423727ms)

-- stdout --
	multinode-181746
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-181746-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1031 18:34:27.788076  498581 out.go:296] Setting OutFile to fd 1 ...
	I1031 18:34:27.788187  498581 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:34:27.788198  498581 out.go:309] Setting ErrFile to fd 2...
	I1031 18:34:27.788202  498581 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:34:27.788328  498581 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15242-478932/.minikube/bin
	I1031 18:34:27.788509  498581 out.go:303] Setting JSON to false
	I1031 18:34:27.788542  498581 mustload.go:65] Loading cluster: multinode-181746
	I1031 18:34:27.788579  498581 notify.go:220] Checking for updates...
	I1031 18:34:27.788914  498581 config.go:180] Loaded profile config "multinode-181746": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1031 18:34:27.788934  498581 status.go:255] checking status of multinode-181746 ...
	I1031 18:34:27.789312  498581 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1031 18:34:27.789369  498581 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:34:27.804388  498581 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:42009
	I1031 18:34:27.804800  498581 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:34:27.805292  498581 main.go:134] libmachine: Using API Version  1
	I1031 18:34:27.805315  498581 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:34:27.805584  498581 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:34:27.805801  498581 main.go:134] libmachine: (multinode-181746) Calling .GetState
	I1031 18:34:27.807318  498581 status.go:330] multinode-181746 host status = "Stopped" (err=<nil>)
	I1031 18:34:27.807335  498581 status.go:343] host is not running, skipping remaining checks
	I1031 18:34:27.807341  498581 status.go:257] multinode-181746 status: &{Name:multinode-181746 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1031 18:34:27.807357  498581 status.go:255] checking status of multinode-181746-m02 ...
	I1031 18:34:27.807663  498581 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I1031 18:34:27.807692  498581 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:34:27.821879  498581 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:44259
	I1031 18:34:27.822211  498581 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:34:27.822637  498581 main.go:134] libmachine: Using API Version  1
	I1031 18:34:27.822661  498581 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:34:27.822975  498581 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:34:27.823144  498581 main.go:134] libmachine: (multinode-181746-m02) Calling .GetState
	I1031 18:34:27.824479  498581 status.go:330] multinode-181746-m02 host status = "Stopped" (err=<nil>)
	I1031 18:34:27.824491  498581 status.go:343] host is not running, skipping remaining checks
	I1031 18:34:27.824497  498581 status.go:257] multinode-181746-m02 status: &{Name:multinode-181746-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.51s)

TestMultiNode/serial/RestartMultiNode (281.01s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-181746 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E1031 18:34:36.087654  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
E1031 18:36:57.676305  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
E1031 18:38:14.493457  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-181746 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (4m40.455619678s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-181746 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (281.01s)

TestMultiNode/serial/ValidateNameConflict (56.01s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-181746
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-181746-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-181746-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (89.081055ms)

-- stdout --
	* [multinode-181746-m02] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15242-478932/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-478932/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-181746-m02' is duplicated with machine name 'multinode-181746-m02' in profile 'multinode-181746'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-181746-m03 --driver=kvm2  --container-runtime=containerd
E1031 18:39:36.088025  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-181746-m03 --driver=kvm2  --container-runtime=containerd: (54.578449257s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-181746
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-181746: exit status 80 (238.526697ms)

-- stdout --
	* Adding node m03 to cluster multinode-181746
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-181746-m03 already exists in multinode-181746-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-181746-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-181746-m03: (1.029396992s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (56.01s)

TestScheduledStopUnix (126.21s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-184248 --memory=2048 --driver=kvm2  --container-runtime=containerd
E1031 18:43:14.493444  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-184248 --memory=2048 --driver=kvm2  --container-runtime=containerd: (54.251792554s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-184248 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-184248 -n scheduled-stop-184248
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-184248 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-184248 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-184248 -n scheduled-stop-184248
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-184248
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-184248 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1031 18:44:36.088062  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-184248
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-184248: exit status 7 (92.074116ms)

-- stdout --
	scheduled-stop-184248
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-184248 -n scheduled-stop-184248
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-184248 -n scheduled-stop-184248: exit status 7 (87.821903ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-184248" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-184248
--- PASS: TestScheduledStopUnix (126.21s)

TestRunningBinaryUpgrade (124.1s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.16.0.4059962449.exe start -p running-upgrade-184711 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.16.0.4059962449.exe start -p running-upgrade-184711 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m36.90939571s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-184711 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-184711 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (24.876331321s)
helpers_test.go:175: Cleaning up "running-upgrade-184711" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-184711
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-184711: (1.601672223s)
--- PASS: TestRunningBinaryUpgrade (124.10s)

TestKubernetesUpgrade (257.12s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-184753 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E1031 18:48:14.492446  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-184753 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m32.139054797s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-184753
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-184753: (2.428450377s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-184753 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-184753 status --format={{.Host}}: exit status 7 (87.379684ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-184753 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E1031 18:49:36.088023  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
E1031 18:50:00.725932  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-184753 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (2m10.835984573s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-184753 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-184753 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-184753 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (126.450417ms)

-- stdout --
	* [kubernetes-upgrade-184753] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15242-478932/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-478932/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.25.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-184753
	    minikube start -p kubernetes-upgrade-184753 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1847532 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.25.3, by running:
	    
	    minikube start -p kubernetes-upgrade-184753 --kubernetes-version=v1.25.3
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-184753 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-184753 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (30.243165546s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-184753" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-184753
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-184753: (1.191809867s)
--- PASS: TestKubernetesUpgrade (257.12s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-184454 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-184454 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (122.600282ms)

-- stdout --
	* [NoKubernetes-184454] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15242-478932/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-478932/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

TestNoKubernetes/serial/StartWithK8s (104.31s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-184454 --driver=kvm2  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-184454 --driver=kvm2  --container-runtime=containerd: (1m43.960083662s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-184454 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (104.31s)

TestNetworkPlugins/group/false (0.32s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:220: (dbg) Run:  out/minikube-linux-amd64 start -p false-184455 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:220: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-184455 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (131.073465ms)

-- stdout --
	* [false-184455] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15242-478932/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-478932/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1031 18:44:55.273706  500800 out.go:296] Setting OutFile to fd 1 ...
	I1031 18:44:55.273815  500800 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:44:55.273825  500800 out.go:309] Setting ErrFile to fd 2...
	I1031 18:44:55.273833  500800 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:44:55.273950  500800 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15242-478932/.minikube/bin
	I1031 18:44:55.274505  500800 out.go:303] Setting JSON to false
	I1031 18:44:55.275396  500800 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8848,"bootTime":1667233047,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 18:44:55.275467  500800 start.go:126] virtualization: kvm guest
	I1031 18:44:55.279175  500800 out.go:177] * [false-184455] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1031 18:44:55.281215  500800 out.go:177]   - MINIKUBE_LOCATION=15242
	I1031 18:44:55.281173  500800 notify.go:220] Checking for updates...
	I1031 18:44:55.284014  500800 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 18:44:55.285457  500800 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15242-478932/kubeconfig
	I1031 18:44:55.286815  500800 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-478932/.minikube
	I1031 18:44:55.288297  500800 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 18:44:55.290062  500800 config.go:180] Loaded profile config "NoKubernetes-184454": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1031 18:44:55.290164  500800 config.go:180] Loaded profile config "force-systemd-env-184454": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1031 18:44:55.290243  500800 config.go:180] Loaded profile config "offline-containerd-184454": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.25.3
	I1031 18:44:55.290290  500800 driver.go:365] Setting default libvirt URI to qemu:///system
	I1031 18:44:55.324804  500800 out.go:177] * Using the kvm2 driver based on user configuration
	I1031 18:44:55.326180  500800 start.go:282] selected driver: kvm2
	I1031 18:44:55.326199  500800 start.go:808] validating driver "kvm2" against <nil>
	I1031 18:44:55.326214  500800 start.go:819] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 18:44:55.328342  500800 out.go:177] 
	W1031 18:44:55.329797  500800 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1031 18:44:55.331256  500800 out.go:177] 

** /stderr **
helpers_test.go:175: Cleaning up "false-184455" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-184455
--- PASS: TestNetworkPlugins/group/false (0.32s)

TestNoKubernetes/serial/StartWithStopK8s (44.97s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-184454 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E1031 18:46:57.675777  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-184454 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (43.188182585s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-184454 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-184454 status -o json: exit status 2 (279.520556ms)

-- stdout --
	{"Name":"NoKubernetes-184454","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-184454
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-184454: (1.498458592s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (44.97s)

TestNoKubernetes/serial/Start (50.16s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-184454 --no-kubernetes --driver=kvm2  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-184454 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (50.162801077s)
--- PASS: TestNoKubernetes/serial/Start (50.16s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-184454 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-184454 "sudo systemctl is-active --quiet service kubelet": exit status 1 (238.762233ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

TestNoKubernetes/serial/ProfileList (1.1s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.10s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-184454
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-184454: (1.241450754s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (40.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-184454 --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-184454 --driver=kvm2  --container-runtime=containerd: (40.394164249s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (40.39s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-184454 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-184454 "sudo systemctl is-active --quiet service kubelet": exit status 1 (280.326225ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.6s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.60s)

                                                
                                    
TestPause/serial/Start (99.32s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-184916 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-184916 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m39.323049988s)
--- PASS: TestPause/serial/Start (99.32s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (131s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p auto-184454 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p auto-184454 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=kvm2  --container-runtime=containerd: (2m10.995989946s)
--- PASS: TestNetworkPlugins/group/auto/Start (131.00s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (56.78s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-184916 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-184916 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (56.764657977s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (56.78s)

                                                
                                    
TestPause/serial/Pause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-184916 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.70s)

                                                
                                    
TestPause/serial/VerifyStatus (0.29s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-184916 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-184916 --output=json --layout=cluster: exit status 2 (289.390393ms)

                                                
                                                
-- stdout --
	{"Name":"pause-184916","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.27.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-184916","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)

                                                
                                    
TestPause/serial/Unpause (0.63s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-184916 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.63s)

                                                
                                    
TestPause/serial/PauseAgain (0.79s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-184916 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.79s)

                                                
                                    
TestPause/serial/DeletePaused (1.01s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-184916 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-184916 --alsologtostderr -v=5: (1.013307116s)
--- PASS: TestPause/serial/DeletePaused (1.01s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.47s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.47s)

                                                
                                    
TestNetworkPlugins/group/cilium/Start (105.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-184455 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=kvm2  --container-runtime=containerd
E1031 18:51:57.675750  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p cilium-184455 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=kvm2  --container-runtime=containerd: (1m45.10267166s)
--- PASS: TestNetworkPlugins/group/cilium/Start (105.10s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.74s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-184858
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.74s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (342.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p calico-184455 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p calico-184455 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=kvm2  --container-runtime=containerd: (5m42.077386769s)
--- PASS: TestNetworkPlugins/group/calico/Start (342.08s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (155.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-184455 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-flannel/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-184455 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (2m35.686971847s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (155.69s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-184454 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-184454 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-k4fwr" [f073f218-9865-49d9-8375-a826fcb1e068] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-k4fwr" [f073f218-9865-49d9-8375-a826fcb1e068] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.016857262s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.43s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-184454 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-184454 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-184454 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (80.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-184455 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=kvm2  --container-runtime=containerd
E1031 18:53:14.492901  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-184455 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m20.924365478s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (80.92s)

                                                
                                    
TestNetworkPlugins/group/cilium/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-dlxq8" [1c5f256b-4163-447d-acaa-b607eb0d2384] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.037263403s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/cilium/KubeletFlags (0.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-184455 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.57s)

                                                
                                    
TestNetworkPlugins/group/cilium/NetCatPod (13.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-184455 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context cilium-184455 replace --force -f testdata/netcat-deployment.yaml: (2.384464252s)
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-7g68h" [800c08f7-e553-4702-b9aa-00e54e1a7504] Pending
helpers_test.go:342: "netcat-5788d667bd-7g68h" [800c08f7-e553-4702-b9aa-00e54e1a7504] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-7g68h" [800c08f7-e553-4702-b9aa-00e54e1a7504] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 11.012079369s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (13.51s)

                                                
                                    
TestNetworkPlugins/group/cilium/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-184455 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/cilium/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-184455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/cilium/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-184455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (121.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-184455 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/flannel/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p flannel-184455 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=kvm2  --container-runtime=containerd: (2m1.88258992s)
--- PASS: TestNetworkPlugins/group/flannel/Start (121.88s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-kbh6n" [7bfadd6a-d91d-4b8c-b2fb-d045a7dcdc36] Running
E1031 18:54:36.087303  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.019214777s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-184455 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (13.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-184455 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-9fjq7" [d43d5dad-fbd3-4125-ab86-1e728c6961ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-9fjq7" [d43d5dad-fbd3-4125-ab86-1e728c6961ee] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.070788323s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.42s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-184455 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context custom-flannel-184455 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-r6hnf" [06d01fe0-22e1-493a-acc0-ba7220e629ef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-flannel/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-r6hnf" [06d01fe0-22e1-493a-acc0-ba7220e629ef] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.009997089s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-184455 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-184455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-184455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (109.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-184455 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-184455 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m49.504548104s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (109.50s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:169: (dbg) Run:  kubectl --context custom-flannel-184455 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:188: (dbg) Run:  kubectl --context custom-flannel-184455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:238: (dbg) Run:  kubectl --context custom-flannel-184455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (88.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-184455 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p bridge-184455 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m28.626567753s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.63s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-system" ...
helpers_test.go:342: "kube-flannel-ds-amd64-48n99" [65079587-9482-4cab-86e6-32ea5402ddfc] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.021173067s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-184455 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (13.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context flannel-184455 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-p598x" [9be67b10-241c-4d4a-a073-aa2685eaddfc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-p598x" [9be67b10-241c-4d4a-a073-aa2685eaddfc] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.012369247s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.34s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:169: (dbg) Run:  kubectl --context flannel-184455 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:188: (dbg) Run:  kubectl --context flannel-184455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:238: (dbg) Run:  kubectl --context flannel-184455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestStartStop/group/old-k8s-version/serial/FirstStart (135.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-185624 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-185624 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m15.117496725s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (135.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-184455 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

TestNetworkPlugins/group/bridge/NetCatPod (12.38s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-184455 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-jg6m7" [f2bae88b-0cfe-4c2e-ba0d-33de1f347eff] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-jg6m7" [f2bae88b-0cfe-4c2e-ba0d-33de1f347eff] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.017560484s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.38s)

TestNetworkPlugins/group/bridge/DNS (26.76s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-184455 exec deployment/netcat -- nslookup kubernetes.default

=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Non-zero exit: kubectl --context bridge-184455 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.191883425s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:169: (dbg) Run:  kubectl --context bridge-184455 exec deployment/netcat -- nslookup kubernetes.default
E1031 18:56:57.676557  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
net_test.go:169: (dbg) Done: kubectl --context bridge-184455 exec deployment/netcat -- nslookup kubernetes.default: (10.217111842s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (26.76s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-184455 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-184455 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-nsg9q" [856d05c0-a406-43d6-825e-c5e8e0478c26] Pending
helpers_test.go:342: "netcat-5788d667bd-nsg9q" [856d05c0-a406-43d6-825e-c5e8e0478c26] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-nsg9q" [856d05c0-a406-43d6-825e-c5e8e0478c26] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.007930606s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.36s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-184455 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-184455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-184455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestStartStop/group/no-preload/serial/FirstStart (132.41s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-185653 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-185653 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3: (2m12.412018537s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (132.41s)

TestNetworkPlugins/group/bridge/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-184455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-184455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestStartStop/group/embed-certs/serial/FirstStart (84.33s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-185707 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-185707 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3: (1m24.333955493s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.33s)

TestNetworkPlugins/group/calico/ControllerPod (5.33s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-rwwjk" [2b2d82ba-43e7-48b6-aeba-c4e3abb40475] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.32649838s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.33s)

TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-184455 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

TestNetworkPlugins/group/calico/NetCatPod (13.23s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-184455 replace --force -f testdata/netcat-deployment.yaml
E1031 18:57:57.541125  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
E1031 18:57:57.697592  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/auto-184454/client.crt: no such file or directory
E1031 18:57:57.702904  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/auto-184454/client.crt: no such file or directory
E1031 18:57:57.713173  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/auto-184454/client.crt: no such file or directory
E1031 18:57:57.733523  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/auto-184454/client.crt: no such file or directory
E1031 18:57:57.774233  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/auto-184454/client.crt: no such file or directory
net_test.go:138: (dbg) Done: kubectl --context calico-184455 replace --force -f testdata/netcat-deployment.yaml: (1.163401151s)
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-x9hrs" [9cc76fb6-2ade-4da5-8daf-5a899a1a34c0] Pending
E1031 18:57:57.854388  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/auto-184454/client.crt: no such file or directory
E1031 18:57:58.015158  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/auto-184454/client.crt: no such file or directory
E1031 18:57:58.335314  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/auto-184454/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-x9hrs" [9cc76fb6-2ade-4da5-8daf-5a899a1a34c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1031 18:57:58.976063  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/auto-184454/client.crt: no such file or directory
E1031 18:58:00.256511  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/auto-184454/client.crt: no such file or directory
E1031 18:58:02.817157  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/auto-184454/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-x9hrs" [9cc76fb6-2ade-4da5-8daf-5a899a1a34c0] Running
E1031 18:58:07.937637  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/auto-184454/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.013234583s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.23s)

TestNetworkPlugins/group/calico/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-184455 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:188: (dbg) Run:  kubectl --context calico-184455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:238: (dbg) Run:  kubectl --context calico-184455 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.27s)
E1031 19:06:57.675824  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (116.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-185811 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3
E1031 18:58:14.493140  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
E1031 18:58:18.178743  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/auto-184454/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-185811 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3: (1m56.56128546s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (116.56s)

TestStartStop/group/embed-certs/serial/DeployApp (11.44s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-185707 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [7eabb715-55b1-4edc-a4d8-8164626f59e3] Pending
helpers_test.go:342: "busybox" [7eabb715-55b1-4edc-a4d8-8164626f59e3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [7eabb715-55b1-4edc-a4d8-8164626f59e3] Running
E1031 18:58:38.659533  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/auto-184454/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.017349119s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-185707 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.44s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-185624 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [335892cc-e60d-4a33-8ad7-bc322f36ad80] Pending
helpers_test.go:342: "busybox" [335892cc-e60d-4a33-8ad7-bc322f36ad80] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1031 18:58:41.293242  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/cilium-184455/client.crt: no such file or directory
E1031 18:58:41.298579  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/cilium-184455/client.crt: no such file or directory
E1031 18:58:41.308714  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/cilium-184455/client.crt: no such file or directory
E1031 18:58:41.328979  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/cilium-184455/client.crt: no such file or directory
E1031 18:58:41.369307  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/cilium-184455/client.crt: no such file or directory
E1031 18:58:41.449623  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/cilium-184455/client.crt: no such file or directory
E1031 18:58:41.610105  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/cilium-184455/client.crt: no such file or directory
E1031 18:58:41.930791  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/cilium-184455/client.crt: no such file or directory
E1031 18:58:42.571722  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/cilium-184455/client.crt: no such file or directory
helpers_test.go:342: "busybox" [335892cc-e60d-4a33-8ad7-bc322f36ad80] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.020971865s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-185624 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (4.5s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-185707 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1031 18:58:43.852906  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/cilium-184455/client.crt: no such file or directory
E1031 18:58:46.413185  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/cilium-184455/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-185707 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (4.4017606s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-185707 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (4.50s)

TestStartStop/group/embed-certs/serial/Stop (102.43s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-185707 --alsologtostderr -v=3

=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-185707 --alsologtostderr -v=3: (1m42.425713725s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (102.43s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-185624 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-185624 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.69s)

TestStartStop/group/old-k8s-version/serial/Stop (92.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-185624 --alsologtostderr -v=3
E1031 18:58:51.534198  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/cilium-184455/client.crt: no such file or directory
E1031 18:59:01.774608  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/cilium-184455/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-185624 --alsologtostderr -v=3: (1m32.405703481s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.41s)

TestStartStop/group/no-preload/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-185653 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [60084ff2-9450-403a-afc4-ce6ec473b170] Pending
helpers_test.go:342: "busybox" [60084ff2-9450-403a-afc4-ce6ec473b170] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [60084ff2-9450-403a-afc4-ce6ec473b170] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.016776357s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-185653 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.36s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-185653 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-185653 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/no-preload/serial/Stop (92.41s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-185653 --alsologtostderr -v=3
E1031 18:59:19.620584  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/auto-184454/client.crt: no such file or directory
E1031 18:59:22.255307  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/cilium-184455/client.crt: no such file or directory
E1031 18:59:31.281465  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/kindnet-184455/client.crt: no such file or directory
E1031 18:59:31.286768  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/kindnet-184455/client.crt: no such file or directory
E1031 18:59:31.297141  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/kindnet-184455/client.crt: no such file or directory
E1031 18:59:31.317398  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/kindnet-184455/client.crt: no such file or directory
E1031 18:59:31.357692  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/kindnet-184455/client.crt: no such file or directory
E1031 18:59:31.438058  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/kindnet-184455/client.crt: no such file or directory
E1031 18:59:31.598578  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/kindnet-184455/client.crt: no such file or directory
E1031 18:59:31.919317  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/kindnet-184455/client.crt: no such file or directory
E1031 18:59:32.559782  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/kindnet-184455/client.crt: no such file or directory
E1031 18:59:33.840634  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/kindnet-184455/client.crt: no such file or directory
E1031 18:59:36.087285  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
E1031 18:59:36.401789  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/kindnet-184455/client.crt: no such file or directory
E1031 18:59:41.522564  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/kindnet-184455/client.crt: no such file or directory
E1031 18:59:46.495056  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/custom-flannel-184455/client.crt: no such file or directory
E1031 18:59:46.500320  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/custom-flannel-184455/client.crt: no such file or directory
E1031 18:59:46.510578  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/custom-flannel-184455/client.crt: no such file or directory
E1031 18:59:46.530829  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/custom-flannel-184455/client.crt: no such file or directory
E1031 18:59:46.571084  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/custom-flannel-184455/client.crt: no such file or directory
E1031 18:59:46.651409  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/custom-flannel-184455/client.crt: no such file or directory
E1031 18:59:46.812396  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/custom-flannel-184455/client.crt: no such file or directory
E1031 18:59:47.132995  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/custom-flannel-184455/client.crt: no such file or directory
E1031 18:59:47.774092  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/custom-flannel-184455/client.crt: no such file or directory
E1031 18:59:49.055076  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/custom-flannel-184455/client.crt: no such file or directory
E1031 18:59:51.615600  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/custom-flannel-184455/client.crt: no such file or directory
E1031 18:59:51.762833  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/kindnet-184455/client.crt: no such file or directory
E1031 18:59:56.735931  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/custom-flannel-184455/client.crt: no such file or directory
E1031 19:00:03.216269  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/cilium-184455/client.crt: no such file or directory
E1031 19:00:06.976721  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/custom-flannel-184455/client.crt: no such file or directory
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-185653 --alsologtostderr -v=3: (1m32.405195556s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.41s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-185811 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [51a45e38-6832-43db-a95d-8b0b771a7774] Pending
helpers_test.go:342: "busybox" [51a45e38-6832-43db-a95d-8b0b771a7774] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1031 19:00:12.243803  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/kindnet-184455/client.crt: no such file or directory
helpers_test.go:342: "busybox" [51a45e38-6832-43db-a95d-8b0b771a7774] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.02387924s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-185811 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-185811 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-185811 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.79s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-185811 --alsologtostderr -v=3
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-185811 --alsologtostderr -v=3: (1m31.748838679s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.75s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-185624 -n old-k8s-version-185624
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-185624 -n old-k8s-version-185624: exit status 7 (84.322691ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-185624 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (378.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-185624 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
E1031 19:00:27.457312  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/custom-flannel-184455/client.crt: no such file or directory
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-185624 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (6m18.598593018s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-185624 -n old-k8s-version-185624
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (378.95s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-185707 -n embed-certs-185707
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-185707 -n embed-certs-185707: exit status 7 (99.651293ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-185707 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (385.55s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-185707 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3
E1031 19:00:41.541072  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/auto-184454/client.crt: no such file or directory
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-185707 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3: (6m25.093625021s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-185707 -n embed-certs-185707
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (385.55s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-185653 -n no-preload-185653
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-185653 -n no-preload-185653: exit status 7 (96.039054ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-185653 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (330.4s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-185653 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3
E1031 19:00:53.204598  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/kindnet-184455/client.crt: no such file or directory
E1031 19:01:04.139424  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/flannel-184455/client.crt: no such file or directory
E1031 19:01:04.144714  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/flannel-184455/client.crt: no such file or directory
E1031 19:01:04.155095  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/flannel-184455/client.crt: no such file or directory
E1031 19:01:04.175369  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/flannel-184455/client.crt: no such file or directory
E1031 19:01:04.215712  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/flannel-184455/client.crt: no such file or directory
E1031 19:01:04.296091  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/flannel-184455/client.crt: no such file or directory
E1031 19:01:04.456566  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/flannel-184455/client.crt: no such file or directory
E1031 19:01:04.776967  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/flannel-184455/client.crt: no such file or directory
E1031 19:01:05.417569  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/flannel-184455/client.crt: no such file or directory
E1031 19:01:06.698477  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/flannel-184455/client.crt: no such file or directory
E1031 19:01:08.418413  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/custom-flannel-184455/client.crt: no such file or directory
E1031 19:01:09.259638  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/flannel-184455/client.crt: no such file or directory
E1031 19:01:14.380397  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/flannel-184455/client.crt: no such file or directory
E1031 19:01:24.620981  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/flannel-184455/client.crt: no such file or directory
E1031 19:01:25.137373  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/cilium-184455/client.crt: no such file or directory
E1031 19:01:27.385516  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/bridge-184455/client.crt: no such file or directory
E1031 19:01:27.390840  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/bridge-184455/client.crt: no such file or directory
E1031 19:01:27.401087  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/bridge-184455/client.crt: no such file or directory
E1031 19:01:27.421360  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/bridge-184455/client.crt: no such file or directory
E1031 19:01:27.462506  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/bridge-184455/client.crt: no such file or directory
E1031 19:01:27.542887  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/bridge-184455/client.crt: no such file or directory
E1031 19:01:27.703970  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/bridge-184455/client.crt: no such file or directory
E1031 19:01:28.024891  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/bridge-184455/client.crt: no such file or directory
E1031 19:01:28.665230  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/bridge-184455/client.crt: no such file or directory
E1031 19:01:29.945956  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/bridge-184455/client.crt: no such file or directory
E1031 19:01:32.506960  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/bridge-184455/client.crt: no such file or directory
E1031 19:01:37.627738  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/bridge-184455/client.crt: no such file or directory
E1031 19:01:41.771513  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/enable-default-cni-184455/client.crt: no such file or directory
E1031 19:01:41.776774  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/enable-default-cni-184455/client.crt: no such file or directory
E1031 19:01:41.787013  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/enable-default-cni-184455/client.crt: no such file or directory
E1031 19:01:41.807265  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/enable-default-cni-184455/client.crt: no such file or directory
E1031 19:01:41.847561  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/enable-default-cni-184455/client.crt: no such file or directory
E1031 19:01:41.927865  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/enable-default-cni-184455/client.crt: no such file or directory
E1031 19:01:42.088477  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/enable-default-cni-184455/client.crt: no such file or directory
E1031 19:01:42.409049  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/enable-default-cni-184455/client.crt: no such file or directory
E1031 19:01:43.049741  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/enable-default-cni-184455/client.crt: no such file or directory
E1031 19:01:44.330580  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/enable-default-cni-184455/client.crt: no such file or directory
E1031 19:01:45.101609  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/flannel-184455/client.crt: no such file or directory
E1031 19:01:46.891747  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/enable-default-cni-184455/client.crt: no such file or directory
E1031 19:01:47.868709  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/bridge-184455/client.crt: no such file or directory
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-185653 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3: (5m30.030206382s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-185653 -n no-preload-185653
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (330.40s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-185811 -n default-k8s-diff-port-185811
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-185811 -n default-k8s-diff-port-185811: exit status 7 (90.037224ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-185811 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (391.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-185811 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3
E1031 19:01:52.012538  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/enable-default-cni-184455/client.crt: no such file or directory
E1031 19:01:57.676216  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory
E1031 19:02:02.253289  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/enable-default-cni-184455/client.crt: no such file or directory
E1031 19:02:08.349513  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/bridge-184455/client.crt: no such file or directory
E1031 19:02:15.124883  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/kindnet-184455/client.crt: no such file or directory
E1031 19:02:22.733802  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/enable-default-cni-184455/client.crt: no such file or directory
E1031 19:02:26.062802  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/flannel-184455/client.crt: no such file or directory
E1031 19:02:30.339004  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/custom-flannel-184455/client.crt: no such file or directory
E1031 19:02:49.310662  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/bridge-184455/client.crt: no such file or directory
E1031 19:02:50.872607  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/calico-184455/client.crt: no such file or directory
E1031 19:02:50.877899  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/calico-184455/client.crt: no such file or directory
E1031 19:02:50.888209  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/calico-184455/client.crt: no such file or directory
E1031 19:02:50.908452  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/calico-184455/client.crt: no such file or directory
E1031 19:02:50.948730  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/calico-184455/client.crt: no such file or directory
E1031 19:02:51.029045  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/calico-184455/client.crt: no such file or directory
E1031 19:02:51.189473  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/calico-184455/client.crt: no such file or directory
E1031 19:02:51.510398  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/calico-184455/client.crt: no such file or directory
E1031 19:02:52.151100  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/calico-184455/client.crt: no such file or directory
E1031 19:02:53.431738  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/calico-184455/client.crt: no such file or directory
E1031 19:02:55.991971  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/calico-184455/client.crt: no such file or directory
E1031 19:02:57.697309  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/auto-184454/client.crt: no such file or directory
E1031 19:03:01.112764  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/calico-184455/client.crt: no such file or directory
E1031 19:03:03.694524  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/enable-default-cni-184455/client.crt: no such file or directory
E1031 19:03:11.353681  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/calico-184455/client.crt: no such file or directory
E1031 19:03:14.493286  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
E1031 19:03:25.381406  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/auto-184454/client.crt: no such file or directory
E1031 19:03:31.834788  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/calico-184455/client.crt: no such file or directory
E1031 19:03:41.292903  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/cilium-184455/client.crt: no such file or directory
E1031 19:03:47.983378  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/flannel-184455/client.crt: no such file or directory
E1031 19:04:08.978589  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/cilium-184455/client.crt: no such file or directory
E1031 19:04:11.231804  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/bridge-184455/client.crt: no such file or directory
E1031 19:04:12.795479  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/calico-184455/client.crt: no such file or directory
E1031 19:04:19.135058  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
E1031 19:04:25.615301  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/enable-default-cni-184455/client.crt: no such file or directory
E1031 19:04:31.281045  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/kindnet-184455/client.crt: no such file or directory
E1031 19:04:36.087035  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/functional-180719/client.crt: no such file or directory
E1031 19:04:46.494209  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/custom-flannel-184455/client.crt: no such file or directory
E1031 19:04:58.965720  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/kindnet-184455/client.crt: no such file or directory
E1031 19:05:14.180053  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/custom-flannel-184455/client.crt: no such file or directory
E1031 19:05:34.715729  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/calico-184455/client.crt: no such file or directory
E1031 19:06:04.139576  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/flannel-184455/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-185811 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3: (6m30.977924215s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-185811 -n default-k8s-diff-port-185811
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (391.37s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-9snb6" [7ccec93c-2cac-4425-ac3f-0f56704e46af] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-9snb6" [7ccec93c-2cac-4425-ac3f-0f56704e46af] Running
E1031 19:06:27.385514  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/bridge-184455/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.016446841s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-9snb6" [7ccec93c-2cac-4425-ac3f-0f56704e46af] Running
E1031 19:06:31.823978  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/flannel-184455/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010926346s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-185653 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-185653 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (2.74s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-185653 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-185653 -n no-preload-185653
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-185653 -n no-preload-185653: exit status 2 (282.628857ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-185653 -n no-preload-185653
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-185653 -n no-preload-185653: exit status 2 (285.250771ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-185653 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-185653 -n no-preload-185653
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-185653 -n no-preload-185653
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.74s)
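The Pause subtest above drives a fixed sequence: pause the profile, expect `minikube status` to exit 2 while reporting APIServer "Paused" and Kubelet "Stopped", then unpause and expect both checks to pass again. A sketch of that verification step with an injectable runner so it can execute without a live cluster; `statusResult` and `verifyPaused` are illustrative names, not minikube API:

```go
package main

import "fmt"

// statusResult models one `minikube status --format=...` call: stdout plus exit code.
type statusResult struct {
	out  string
	code int
}

// verifyPaused reproduces the harness's post-pause checks as seen in this log:
// exit status 2 is expected ("may be ok") while the profile is paused.
func verifyPaused(run func(format string) statusResult) error {
	api := run("{{.APIServer}}")
	if api.code != 2 || api.out != "Paused" {
		return fmt.Errorf("apiserver: got %q (exit %d), want Paused (exit 2)", api.out, api.code)
	}
	kubelet := run("{{.Kubelet}}")
	if kubelet.code != 2 || kubelet.out != "Stopped" {
		return fmt.Errorf("kubelet: got %q (exit %d), want Stopped (exit 2)", kubelet.out, kubelet.code)
	}
	return nil
}

func main() {
	// Fake runner standing in for the minikube binary on a paused profile.
	fake := func(format string) statusResult {
		if format == "{{.APIServer}}" {
			return statusResult{"Paused", 2}
		}
		return statusResult{"Stopped", 2}
	}
	fmt.Println(verifyPaused(fake)) // prints: <nil>
}
```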

TestStartStop/group/newest-cni/serial/FirstStart (69.06s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-190640 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3
E1031 19:06:40.726910  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/ingress-addon-legacy-181025/client.crt: no such file or directory

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-190640 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3: (1m9.060647299s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (69.06s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-59d54d6bc8-26452" [55312d70-b7e5-4e2f-a1d2-f0ef42b55b6b] Running
E1031 19:06:41.772134  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/enable-default-cni-184455/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018322248s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-59d54d6bc8-26452" [55312d70-b7e5-4e2f-a1d2-f0ef42b55b6b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007863479s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-185624 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-185624 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/old-k8s-version/serial/Pause (3.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-185624 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-185624 -n old-k8s-version-185624
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-185624 -n old-k8s-version-185624: exit status 2 (320.564705ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-185624 -n old-k8s-version-185624
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-185624 -n old-k8s-version-185624: exit status 2 (319.223445ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-185624 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-185624 -n old-k8s-version-185624
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-185624 -n old-k8s-version-185624
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.27s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-mk74r" [b10f7932-88d3-4f09-81ed-c6b360cc6cb4] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-mk74r" [b10f7932-88d3-4f09-81ed-c6b360cc6cb4] Running
E1031 19:07:09.455630  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/enable-default-cni-184455/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.027958156s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-mk74r" [b10f7932-88d3-4f09-81ed-c6b360cc6cb4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010370569s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-185707 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.41s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-185707 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.41s)

TestStartStop/group/embed-certs/serial/Pause (3.56s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-185707 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-185707 --alsologtostderr -v=1: (1.229390346s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-185707 -n embed-certs-185707
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-185707 -n embed-certs-185707: exit status 2 (281.477237ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-185707 -n embed-certs-185707
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-185707 -n embed-certs-185707: exit status 2 (276.815415ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-185707 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p embed-certs-185707 --alsologtostderr -v=1: (1.034048068s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-185707 -n embed-certs-185707
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-185707 -n embed-certs-185707
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.56s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-190640 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/newest-cni/serial/Stop (5.14s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-190640 --alsologtostderr -v=3
E1031 19:07:50.872099  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/calico-184455/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-190640 --alsologtostderr -v=3: (5.135171004s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.14s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-190640 -n newest-cni-190640
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-190640 -n newest-cni-190640: exit status 7 (93.109638ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-190640 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)
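Both the Pause and EnableAddonAfterStop subtests tolerate non-zero `minikube status` exits, logging "status error: exit status N (may be ok)". In this run, exit 2 accompanies paused/stopped components while the host runs, and exit 7 accompanies a stopped host. A sketch of that tolerance check; the code set is inferred from this log, not from minikube's documented exit-code contract:

```go
package main

import "fmt"

// mayBeOK reports whether a `minikube status` exit code is one this harness
// accepts. 0 is success; 2 and 7 both appear in this log alongside
// "status error: ... (may be ok)". Illustrative, inferred from the log only.
func mayBeOK(code int) bool {
	switch code {
	case 0, 2, 7:
		return true
	default:
		return false
	}
}

func main() {
	for _, code := range []int{0, 2, 7, 1} {
		fmt.Println(code, mayBeOK(code))
	}
}
```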

TestStartStop/group/newest-cni/serial/SecondStart (107.85s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-190640 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3
E1031 19:07:57.697217  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/auto-184454/client.crt: no such file or directory
E1031 19:08:14.493125  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/addons-180051/client.crt: no such file or directory
E1031 19:08:18.556349  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/calico-184455/client.crt: no such file or directory

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-190640 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.25.3: (1m47.563671383s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-190640 -n newest-cni-190640
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (107.85s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (21.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-mwn6j" [756423a7-d89e-491c-878d-245368c8ca3d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-mwn6j" [756423a7-d89e-491c-878d-245368c8ca3d] Running
E1031 19:08:39.731103  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/old-k8s-version-185624/client.crt: no such file or directory
E1031 19:08:39.736424  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/old-k8s-version-185624/client.crt: no such file or directory
E1031 19:08:39.746658  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/old-k8s-version-185624/client.crt: no such file or directory
E1031 19:08:39.766916  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/old-k8s-version-185624/client.crt: no such file or directory
E1031 19:08:39.807170  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/old-k8s-version-185624/client.crt: no such file or directory
E1031 19:08:39.887537  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/old-k8s-version-185624/client.crt: no such file or directory
E1031 19:08:40.047884  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/old-k8s-version-185624/client.crt: no such file or directory
E1031 19:08:40.368030  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/old-k8s-version-185624/client.crt: no such file or directory
E1031 19:08:41.008445  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/old-k8s-version-185624/client.crt: no such file or directory
E1031 19:08:41.293165  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/cilium-184455/client.crt: no such file or directory
E1031 19:08:42.289440  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/old-k8s-version-185624/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 21.016604683s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (21.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-mwn6j" [756423a7-d89e-491c-878d-245368c8ca3d] Running
E1031 19:08:44.849787  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/old-k8s-version-185624/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008498032s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-185811 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-185811 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-185811 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-185811 -n default-k8s-diff-port-185811
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-185811 -n default-k8s-diff-port-185811: exit status 2 (263.684971ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-185811 -n default-k8s-diff-port-185811
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-185811 -n default-k8s-diff-port-185811: exit status 2 (262.448413ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-185811 --alsologtostderr -v=1
E1031 19:08:49.969938  486314 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-478932/.minikube/profiles/old-k8s-version-185624/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-185811 -n default-k8s-diff-port-185811
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-185811 -n default-k8s-diff-port-185811
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.56s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-190640 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/newest-cni/serial/Pause (2.17s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-190640 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-190640 -n newest-cni-190640
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-190640 -n newest-cni-190640: exit status 2 (253.144285ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-190640 -n newest-cni-190640
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-190640 -n newest-cni-190640: exit status 2 (256.515262ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-190640 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-190640 -n newest-cni-190640
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-190640 -n newest-cni-190640
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.17s)

Test skip (31/296)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.25.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.25.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.25.3/cached-images (0.00s)

TestDownloadOnly/v1.25.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.25.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.25.3/binaries (0.00s)

TestDownloadOnly/v1.25.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.25.3/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.25.3/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:214: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:451: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:456: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:291: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (0.25s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:91: Skipping the test as containerd container runtimes requires CNI
helpers_test.go:175: Cleaning up "kubenet-184454" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-184454
--- SKIP: TestNetworkPlugins/group/kubenet (0.25s)

TestStartStop/group/disable-driver-mounts (0.22s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-185811" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-185811
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)