Test Report: KVM_Linux 15242

45262e5daa3ddfe0c6cdcb881d2af1d3532e9ce3:2022-10-31:26351

Tests failed (3/306)

Order | Failed test                               | Duration (s)
197   | TestMultiNode/serial/ValidateNameConflict | 3.09
210   | TestKubernetesUpgrade                     | 172.14
314   | TestNetworkPlugins/group/kubenet/HairPin  | 60.15
TestMultiNode/serial/ValidateNameConflict (3.09s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-175611
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-175611-m02 --driver=kvm2 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-175611-m02 --driver=kvm2 : exit status 14 (85.277012ms)

-- stdout --
	* [multinode-175611-m02] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15242-42743/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-42743/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-175611-m02' is duplicated with machine name 'multinode-175611-m02' in profile 'multinode-175611'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
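The MK_USAGE exit above is minikube refusing a new profile whose name collides with a machine name inside an existing profile ("-m02" is the second node of "multinode-175611"). As an illustration only, that kind of uniqueness guard boils down to a lookup like the hypothetical helper below; this is a sketch, not minikube's actual validation code:

```go
package main

import "fmt"

// validateProfileName rejects a candidate profile name that is already
// taken, either as a profile or as a machine (node) name inside one.
// The map values name the owning profile, mirroring the error message
// in the stderr block above. Hypothetical sketch, not minikube code.
func validateProfileName(name string, taken map[string]string) error {
	if owner, dup := taken[name]; dup {
		return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
			name, name, owner)
	}
	return nil
}

func main() {
	// "multinode-175611-m02" already exists as the second machine of
	// the "multinode-175611" profile, so reusing it must fail.
	taken := map[string]string{
		"multinode-175611":     "multinode-175611",
		"multinode-175611-m02": "multinode-175611",
	}
	fmt.Println(validateProfileName("multinode-175611-m02", taken) != nil) // duplicate: rejected
	fmt.Println(validateProfileName("multinode-175611-m03", taken) != nil) // unique: accepted
}
```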
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-175611-m03 --driver=kvm2 
multinode_test.go:458: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-175611-m03 --driver=kvm2 : signal: killed (910.60649ms)

-- stdout --
	* [multinode-175611-m03] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15242-42743/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-42743/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the kvm2 driver based on user configuration
	* Starting control plane node multinode-175611-m03 in cluster multinode-175611-m03
	* Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

-- /stdout --
multinode_test.go:460: failed to start profile. args "out/minikube-linux-amd64 start -p multinode-175611-m03 --driver=kvm2 " : signal: killed
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-175611
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-175611: context deadline exceeded (941ns)
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-175611-m03
multinode_test.go:470: (dbg) Non-zero exit: out/minikube-linux-amd64 delete -p multinode-175611-m03: context deadline exceeded (115ns)
multinode_test.go:472: failed to clean temporary profile. args "out/minikube-linux-amd64 delete -p multinode-175611-m03" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-175611 -n multinode-175611
helpers_test.go:244: <<< TestMultiNode/serial/ValidateNameConflict FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/ValidateNameConflict]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-175611 logs -n 25: (1.270552516s)
helpers_test.go:252: TestMultiNode/serial/ValidateNameConflict logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-175611 cp multinode-175611-m02:/home/docker/cp-test.txt                       | multinode-175611     | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
	|         | multinode-175611-m03:/home/docker/cp-test_multinode-175611-m02_multinode-175611-m03.txt |                      |         |         |                     |                     |
	| ssh     | multinode-175611 ssh -n                                                                 | multinode-175611     | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
	|         | multinode-175611-m02 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-175611 ssh -n multinode-175611-m03 sudo cat                                   | multinode-175611     | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
	|         | /home/docker/cp-test_multinode-175611-m02_multinode-175611-m03.txt                      |                      |         |         |                     |                     |
	| cp      | multinode-175611 cp testdata/cp-test.txt                                                | multinode-175611     | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
	|         | multinode-175611-m03:/home/docker/cp-test.txt                                           |                      |         |         |                     |                     |
	| ssh     | multinode-175611 ssh -n                                                                 | multinode-175611     | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
	|         | multinode-175611-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| cp      | multinode-175611 cp multinode-175611-m03:/home/docker/cp-test.txt                       | multinode-175611     | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3470561963/001/cp-test_multinode-175611-m03.txt         |                      |         |         |                     |                     |
	| ssh     | multinode-175611 ssh -n                                                                 | multinode-175611     | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
	|         | multinode-175611-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| cp      | multinode-175611 cp multinode-175611-m03:/home/docker/cp-test.txt                       | multinode-175611     | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
	|         | multinode-175611:/home/docker/cp-test_multinode-175611-m03_multinode-175611.txt         |                      |         |         |                     |                     |
	| ssh     | multinode-175611 ssh -n                                                                 | multinode-175611     | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
	|         | multinode-175611-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-175611 ssh -n multinode-175611 sudo cat                                       | multinode-175611     | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
	|         | /home/docker/cp-test_multinode-175611-m03_multinode-175611.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-175611 cp multinode-175611-m03:/home/docker/cp-test.txt                       | multinode-175611     | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
	|         | multinode-175611-m02:/home/docker/cp-test_multinode-175611-m03_multinode-175611-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-175611 ssh -n                                                                 | multinode-175611     | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
	|         | multinode-175611-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-175611 ssh -n multinode-175611-m02 sudo cat                                   | multinode-175611     | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
	|         | /home/docker/cp-test_multinode-175611-m03_multinode-175611-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-175611 node stop m03                                                          | multinode-175611     | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
	| node    | multinode-175611 node start                                                             | multinode-175611     | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:00 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-175611                                                                | multinode-175611     | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC |                     |
	| stop    | -p multinode-175611                                                                     | multinode-175611     | jenkins | v1.27.1 | 31 Oct 22 18:00 UTC | 31 Oct 22 18:01 UTC |
	| start   | -p multinode-175611                                                                     | multinode-175611     | jenkins | v1.27.1 | 31 Oct 22 18:01 UTC | 31 Oct 22 18:15 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-175611                                                                | multinode-175611     | jenkins | v1.27.1 | 31 Oct 22 18:15 UTC |                     |
	| node    | multinode-175611 node delete                                                            | multinode-175611     | jenkins | v1.27.1 | 31 Oct 22 18:15 UTC | 31 Oct 22 18:15 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-175611 stop                                                                   | multinode-175611     | jenkins | v1.27.1 | 31 Oct 22 18:15 UTC | 31 Oct 22 18:15 UTC |
	| start   | -p multinode-175611                                                                     | multinode-175611     | jenkins | v1.27.1 | 31 Oct 22 18:15 UTC | 31 Oct 22 18:26 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	| node    | list -p multinode-175611                                                                | multinode-175611     | jenkins | v1.27.1 | 31 Oct 22 18:26 UTC |                     |
	| start   | -p multinode-175611-m02                                                                 | multinode-175611-m02 | jenkins | v1.27.1 | 31 Oct 22 18:26 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	| start   | -p multinode-175611-m03                                                                 | multinode-175611-m03 | jenkins | v1.27.1 | 31 Oct 22 18:26 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/10/31 18:26:11
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.19.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 18:26:11.125694   62231 out.go:296] Setting OutFile to fd 1 ...
	I1031 18:26:11.125840   62231 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:26:11.125843   62231 out.go:309] Setting ErrFile to fd 2...
	I1031 18:26:11.125847   62231 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:26:11.125986   62231 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15242-42743/.minikube/bin
	I1031 18:26:11.126593   62231 out.go:303] Setting JSON to false
	I1031 18:26:11.127399   62231 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7723,"bootTime":1667233048,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 18:26:11.127499   62231 start.go:126] virtualization: kvm guest
	I1031 18:26:11.129591   62231 out.go:177] * [multinode-175611-m03] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1031 18:26:11.131374   62231 notify.go:220] Checking for updates...
	I1031 18:26:11.132960   62231 out.go:177]   - MINIKUBE_LOCATION=15242
	I1031 18:26:11.134329   62231 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 18:26:11.135825   62231 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15242-42743/kubeconfig
	I1031 18:26:11.137154   62231 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-42743/.minikube
	I1031 18:26:11.138538   62231 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 18:26:11.140081   62231 config.go:180] Loaded profile config "multinode-175611": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1031 18:26:11.140142   62231 driver.go:365] Setting default libvirt URI to qemu:///system
	I1031 18:26:11.181711   62231 out.go:177] * Using the kvm2 driver based on user configuration
	I1031 18:26:11.182945   62231 start.go:282] selected driver: kvm2
	I1031 18:26:11.182960   62231 start.go:808] validating driver "kvm2" against <nil>
	I1031 18:26:11.182986   62231 start.go:819] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 18:26:11.183258   62231 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 18:26:11.183443   62231 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15242-42743/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 18:26:11.198074   62231 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.27.1
	I1031 18:26:11.198131   62231 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1031 18:26:11.198634   62231 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32101MB, container=0MB
	I1031 18:26:11.198759   62231 start_flags.go:870] Wait components to verify : map[apiserver:true system_pods:true]
	I1031 18:26:11.198787   62231 cni.go:95] Creating CNI manager for ""
	I1031 18:26:11.198801   62231 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1031 18:26:11.198811   62231 start_flags.go:317] config:
	{Name:multinode-175611-m03 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-175611-m03 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 18:26:11.198922   62231 iso.go:124] acquiring lock: {Name:mk1b8df3d0e7e7151d07f634c55bc8cb360d70d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 18:26:11.201025   62231 out.go:177] * Starting control plane node multinode-175611-m03 in cluster multinode-175611-m03
	I1031 18:26:11.202216   62231 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1031 18:26:11.202251   62231 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15242-42743/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1031 18:26:11.202262   62231 cache.go:57] Caching tarball of preloaded images
	I1031 18:26:11.202356   62231 preload.go:174] Found /home/jenkins/minikube-integration/15242-42743/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1031 18:26:11.202369   62231 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1031 18:26:11.202465   62231 profile.go:148] Saving config to /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/multinode-175611-m03/config.json ...
	I1031 18:26:11.202476   62231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/multinode-175611-m03/config.json: {Name:mka676e20c37fe0993654df25a2a4714bf7b01cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1031 18:26:11.202622   62231 cache.go:208] Successfully downloaded all kic artifacts
	I1031 18:26:11.202636   62231 start.go:364] acquiring machines lock for multinode-175611-m03: {Name:mk15de2cb0eed92cba3648c402e45ec73a1cbfb5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 18:26:11.202671   62231 start.go:368] acquired machines lock for "multinode-175611-m03" in 28.255µs
	I1031 18:26:11.202699   62231 start.go:93] Provisioning new machine with config: &{Name:multinode-175611-m03 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15159/minikube-v1.27.0-1666206003-15159-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-175611-m03 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1031 18:26:11.202761   62231 start.go:125] createHost starting for "" (driver="kvm2")
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2022-10-31 18:16:06 UTC, ends at Mon 2022-10-31 18:26:12 UTC. --
	Oct 31 18:16:33 multinode-175611 dockerd[844]: time="2022-10-31T18:16:33.569568807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 18:16:33 multinode-175611 dockerd[844]: time="2022-10-31T18:16:33.569643013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 18:16:33 multinode-175611 dockerd[844]: time="2022-10-31T18:16:33.569654813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 18:16:33 multinode-175611 dockerd[844]: time="2022-10-31T18:16:33.569828587Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2d6918e71bf991b6d201a8c88bae87bad4b090fdec69d97e29af6276ef71c233 pid=2027 runtime=io.containerd.runc.v2
	Oct 31 18:16:34 multinode-175611 dockerd[844]: time="2022-10-31T18:16:34.255655819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 18:16:34 multinode-175611 dockerd[844]: time="2022-10-31T18:16:34.255706912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 18:16:34 multinode-175611 dockerd[844]: time="2022-10-31T18:16:34.255777301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 18:16:34 multinode-175611 dockerd[844]: time="2022-10-31T18:16:34.256047232Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c500f01efc43d90cb0728771c356291670051403654ad79846920976ea208711 pid=2074 runtime=io.containerd.runc.v2
	Oct 31 18:16:36 multinode-175611 dockerd[844]: time="2022-10-31T18:16:36.718381119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 18:16:36 multinode-175611 dockerd[844]: time="2022-10-31T18:16:36.718607357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 18:16:36 multinode-175611 dockerd[844]: time="2022-10-31T18:16:36.718681931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 18:16:36 multinode-175611 dockerd[844]: time="2022-10-31T18:16:36.719172818Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/06f4d64b0c51b7545b0bab9edafc9b81c589cf4cfe40f0979faec11a93a74712 pid=2258 runtime=io.containerd.runc.v2
	Oct 31 18:16:47 multinode-175611 dockerd[844]: time="2022-10-31T18:16:47.450236906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 18:16:47 multinode-175611 dockerd[844]: time="2022-10-31T18:16:47.450934007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 18:16:47 multinode-175611 dockerd[844]: time="2022-10-31T18:16:47.451113868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 18:16:47 multinode-175611 dockerd[844]: time="2022-10-31T18:16:47.451949653Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f1df0fd45577374129e3bc8d6158ebde90eb4b419f0a71152b9b66b4abb4b6a0 pid=2466 runtime=io.containerd.runc.v2
	Oct 31 18:17:04 multinode-175611 dockerd[844]: time="2022-10-31T18:17:04.423161014Z" level=info msg="shim disconnected" id=c500f01efc43d90cb0728771c356291670051403654ad79846920976ea208711
	Oct 31 18:17:04 multinode-175611 dockerd[838]: time="2022-10-31T18:17:04.423886476Z" level=info msg="ignoring event" container=c500f01efc43d90cb0728771c356291670051403654ad79846920976ea208711 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 31 18:17:04 multinode-175611 dockerd[844]: time="2022-10-31T18:17:04.424058881Z" level=warning msg="cleaning up after shim disconnected" id=c500f01efc43d90cb0728771c356291670051403654ad79846920976ea208711 namespace=moby
	Oct 31 18:17:04 multinode-175611 dockerd[844]: time="2022-10-31T18:17:04.424077565Z" level=info msg="cleaning up dead shim"
	Oct 31 18:17:04 multinode-175611 dockerd[844]: time="2022-10-31T18:17:04.445656734Z" level=warning msg="cleanup warnings time=\"2022-10-31T18:17:04Z\" level=info msg=\"starting signal loop\" namespace=moby pid=2740 runtime=io.containerd.runc.v2\n"
	Oct 31 18:17:19 multinode-175611 dockerd[844]: time="2022-10-31T18:17:19.453169163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 31 18:17:19 multinode-175611 dockerd[844]: time="2022-10-31T18:17:19.453252233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 31 18:17:19 multinode-175611 dockerd[844]: time="2022-10-31T18:17:19.453264462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 31 18:17:19 multinode-175611 dockerd[844]: time="2022-10-31T18:17:19.453939274Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c29420223e2af427d7a1e0b2cd29ca879ed6262518b1877175e1bfdf463be803 pid=2911 runtime=io.containerd.runc.v2
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	c29420223e2af       6e38f40d628db                                                                                         8 minutes ago       Running             storage-provisioner       3                   2d6918e71bf99
	f1df0fd455773       beaaf00edd38a                                                                                         9 minutes ago       Running             kube-proxy                2                   49acb80327f0e
	06f4d64b0c51b       d6e3e26021b60                                                                                         9 minutes ago       Running             kindnet-cni               2                   dced044e7ad56
	c500f01efc43d       6e38f40d628db                                                                                         9 minutes ago       Exited              storage-provisioner       2                   2d6918e71bf99
	d3789e2545d63       a8a176a5d5d69                                                                                         9 minutes ago       Running             etcd                      2                   90fb7923f89e2
	690e0b37aeaeb       6d23ec0e8b87e                                                                                         9 minutes ago       Running             kube-scheduler            2                   317f0ee11b3ce
	741b9d7665bbe       6039992312758                                                                                         9 minutes ago       Running             kube-controller-manager   2                   e6cab2effd357
	71635fe14f2af       0346dbd74bcb9                                                                                         9 minutes ago       Running             kube-apiserver            2                   8ab17f07cb066
	15236358fc30b       d6e3e26021b60                                                                                         24 minutes ago      Exited              kindnet-cni               1                   0bcd7f6da7d4e
	493b45ebbbc77       beaaf00edd38a                                                                                         24 minutes ago      Exited              kube-proxy                1                   793df2f45c039
	ed32bb110bbd0       6d23ec0e8b87e                                                                                         24 minutes ago      Exited              kube-scheduler            1                   7137cfe78d746
	be68f465191bc       a8a176a5d5d69                                                                                         24 minutes ago      Exited              etcd                      1                   0b7d435ff2606
	89bcd7b3aa70d       0346dbd74bcb9                                                                                         24 minutes ago      Exited              kube-apiserver            1                   671fdf79fe64a
	30d1c1171fc7f       6039992312758                                                                                         24 minutes ago      Exited              kube-controller-manager   1                   ffb10987f27d1
	8bc1bbb6d09a2       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   27 minutes ago      Exited              busybox                   0                   efb2f0b39793a
	67e65275be7a0       5185b96f0becf                                                                                         28 minutes ago      Exited              coredns                   0                   ea5ed99abc59c
	
	* 
	* ==> coredns [67e65275be7a] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 9a34f9264402cb585a9f45fa2022f72259f38c0069ff0551404dff6d373c3318d40dccb7d57503b326f0f19faa2110be407c171bae22df1ef9dd2930a017b6e6
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-175611
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-175611
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c34ec3182cacd96a3e168acffe335374d66b10cc
	                    minikube.k8s.io/name=multinode-175611
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_10_31T17_57_06_0700
	                    minikube.k8s.io/version=v1.27.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 31 Oct 2022 17:57:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-175611
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 31 Oct 2022 18:26:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 31 Oct 2022 18:22:28 +0000   Mon, 31 Oct 2022 17:56:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 31 Oct 2022 18:22:28 +0000   Mon, 31 Oct 2022 17:56:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 31 Oct 2022 18:22:28 +0000   Mon, 31 Oct 2022 17:56:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 31 Oct 2022 18:22:28 +0000   Mon, 31 Oct 2022 18:17:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.114
	  Hostname:    multinode-175611
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 54d005fa00074fba89f5cb22ed71372c
	  System UUID:                54d005fa-0007-4fba-89f5-cb22ed71372c
	  Boot ID:                    94ea7f4f-f699-430a-a63f-98f30f5d0f71
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.20
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-65db55d5d6-m9bbn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-565d847f94-vwsgh                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-multinode-175611                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kindnet-89x2z                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      28m
	  kube-system                 kube-apiserver-multinode-175611             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-multinode-175611    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-tktj7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-multinode-175611             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 28m                    kube-proxy       
	  Normal  Starting                 9m25s                  kube-proxy       
	  Normal  Starting                 24m                    kube-proxy       
	  Normal  NodeHasNoDiskPressure    29m                    kubelet          Node multinode-175611 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m                    kubelet          Node multinode-175611 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     29m                    kubelet          Node multinode-175611 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                    node-controller  Node multinode-175611 event: Registered Node multinode-175611 in Controller
	  Normal  NodeReady                28m                    kubelet          Node multinode-175611 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    24m (x8 over 24m)      kubelet          Node multinode-175611 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  24m (x8 over 24m)      kubelet          Node multinode-175611 status is now: NodeHasSufficientMemory
	  Normal  Starting                 24m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     24m (x7 over 24m)      kubelet          Node multinode-175611 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           24m                    node-controller  Node multinode-175611 event: Registered Node multinode-175611 in Controller
	  Normal  Starting                 9m48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m48s (x8 over 9m48s)  kubelet          Node multinode-175611 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m48s (x8 over 9m48s)  kubelet          Node multinode-175611 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m48s (x7 over 9m48s)  kubelet          Node multinode-175611 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m28s                  node-controller  Node multinode-175611 event: Registered Node multinode-175611 in Controller
	
	
	Name:               multinode-175611-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-175611-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 31 Oct 2022 18:21:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-175611-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 31 Oct 2022 18:26:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 31 Oct 2022 18:22:08 +0000   Mon, 31 Oct 2022 18:21:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 31 Oct 2022 18:22:08 +0000   Mon, 31 Oct 2022 18:21:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 31 Oct 2022 18:22:08 +0000   Mon, 31 Oct 2022 18:21:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 31 Oct 2022 18:22:08 +0000   Mon, 31 Oct 2022 18:22:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.195
	  Hostname:    multinode-175611-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a62f2376b3a1469c87b0b0be9ac1e409
	  System UUID:                a62f2376-b3a1-469c-87b0-b0be9ac1e409
	  Boot ID:                    47a6a210-f515-4570-81fd-c409aef4db6f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.20
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-65db55d5d6-p6579    0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kindnet-9kfkh               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-proxy-x6h9n            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 19m                    kube-proxy  
	  Normal  Starting                 27m                    kube-proxy  
	  Normal  Starting                 4m11s                  kube-proxy  
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet     Node multinode-175611-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)      kubelet     Node multinode-175611-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)      kubelet     Node multinode-175611-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)      kubelet     Node multinode-175611-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)      kubelet     Node multinode-175611-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 19m                    kubelet     Starting kubelet.
	  Normal  NodeReady                19m                    kubelet     Node multinode-175611-m02 status is now: NodeReady
	  Normal  Starting                 4m14s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m14s (x2 over 4m14s)  kubelet     Node multinode-175611-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s (x2 over 4m14s)  kubelet     Node multinode-175611-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s (x2 over 4m14s)  kubelet     Node multinode-175611-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m14s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m4s                   kubelet     Node multinode-175611-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Oct31 18:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066213] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +3.827409] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.311524] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.129931] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.359414] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.721937] systemd-fstab-generator[515]: Ignoring "noauto" for root device
	[  +0.091788] systemd-fstab-generator[526]: Ignoring "noauto" for root device
	[  +1.025114] systemd-fstab-generator[751]: Ignoring "noauto" for root device
	[  +0.281790] systemd-fstab-generator[807]: Ignoring "noauto" for root device
	[  +0.107080] systemd-fstab-generator[818]: Ignoring "noauto" for root device
	[  +0.100460] systemd-fstab-generator[829]: Ignoring "noauto" for root device
	[  +1.587888] systemd-fstab-generator[1008]: Ignoring "noauto" for root device
	[  +0.102961] systemd-fstab-generator[1019]: Ignoring "noauto" for root device
	[  +4.896741] systemd-fstab-generator[1219]: Ignoring "noauto" for root device
	[  +0.369088] kauditd_printk_skb: 67 callbacks suppressed
	[ +13.396834] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [be68f465191b] <==
	* {"level":"info","ts":"2022-10-31T18:01:34.248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce switched to configuration voters=(9075093065618959310)"}
	{"level":"info","ts":"2022-10-31T18:01:34.252Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"101f5850ef417740","local-member-id":"7df1350fafd42bce","added-peer-id":"7df1350fafd42bce","added-peer-peer-urls":["https://192.168.39.114:2380"]}
	{"level":"info","ts":"2022-10-31T18:01:34.252Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"101f5850ef417740","local-member-id":"7df1350fafd42bce","cluster-version":"3.5"}
	{"level":"info","ts":"2022-10-31T18:01:34.255Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-10-31T18:01:34.283Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-10-31T18:01:34.284Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.114:2380"}
	{"level":"info","ts":"2022-10-31T18:01:34.284Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.114:2380"}
	{"level":"info","ts":"2022-10-31T18:01:34.284Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7df1350fafd42bce","initial-advertise-peer-urls":["https://192.168.39.114:2380"],"listen-peer-urls":["https://192.168.39.114:2380"],"advertise-client-urls":["https://192.168.39.114:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.114:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-10-31T18:01:34.284Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-10-31T18:01:35.585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce is starting a new election at term 2"}
	{"level":"info","ts":"2022-10-31T18:01:35.585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce became pre-candidate at term 2"}
	{"level":"info","ts":"2022-10-31T18:01:35.585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce received MsgPreVoteResp from 7df1350fafd42bce at term 2"}
	{"level":"info","ts":"2022-10-31T18:01:35.585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce became candidate at term 3"}
	{"level":"info","ts":"2022-10-31T18:01:35.585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce received MsgVoteResp from 7df1350fafd42bce at term 3"}
	{"level":"info","ts":"2022-10-31T18:01:35.585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce became leader at term 3"}
	{"level":"info","ts":"2022-10-31T18:01:35.585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7df1350fafd42bce elected leader 7df1350fafd42bce at term 3"}
	{"level":"info","ts":"2022-10-31T18:01:35.586Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"7df1350fafd42bce","local-member-attributes":"{Name:multinode-175611 ClientURLs:[https://192.168.39.114:2379]}","request-path":"/0/members/7df1350fafd42bce/attributes","cluster-id":"101f5850ef417740","publish-timeout":"7s"}
	{"level":"info","ts":"2022-10-31T18:01:35.586Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-10-31T18:01:35.588Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-10-31T18:01:35.588Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-10-31T18:01:35.590Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.114:2379"}
	{"level":"info","ts":"2022-10-31T18:01:35.596Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-10-31T18:01:35.596Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-10-31T18:11:35.618Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1140}
	{"level":"info","ts":"2022-10-31T18:11:35.639Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":1140,"took":"20.643042ms"}
	
	* 
	* ==> etcd [d3789e2545d6] <==
	* {"level":"info","ts":"2022-10-31T18:16:27.883Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"7df1350fafd42bce","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-10-31T18:16:27.884Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-10-31T18:16:27.918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce switched to configuration voters=(9075093065618959310)"}
	{"level":"info","ts":"2022-10-31T18:16:27.925Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"101f5850ef417740","local-member-id":"7df1350fafd42bce","added-peer-id":"7df1350fafd42bce","added-peer-peer-urls":["https://192.168.39.114:2380"]}
	{"level":"info","ts":"2022-10-31T18:16:27.928Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"101f5850ef417740","local-member-id":"7df1350fafd42bce","cluster-version":"3.5"}
	{"level":"info","ts":"2022-10-31T18:16:27.928Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-10-31T18:16:27.949Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-10-31T18:16:27.950Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7df1350fafd42bce","initial-advertise-peer-urls":["https://192.168.39.114:2380"],"listen-peer-urls":["https://192.168.39.114:2380"],"advertise-client-urls":["https://192.168.39.114:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.114:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-10-31T18:16:27.950Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-10-31T18:16:27.953Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.114:2380"}
	{"level":"info","ts":"2022-10-31T18:16:27.953Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.114:2380"}
	{"level":"info","ts":"2022-10-31T18:16:29.031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce is starting a new election at term 3"}
	{"level":"info","ts":"2022-10-31T18:16:29.032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce became pre-candidate at term 3"}
	{"level":"info","ts":"2022-10-31T18:16:29.032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce received MsgPreVoteResp from 7df1350fafd42bce at term 3"}
	{"level":"info","ts":"2022-10-31T18:16:29.032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce became candidate at term 4"}
	{"level":"info","ts":"2022-10-31T18:16:29.032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce received MsgVoteResp from 7df1350fafd42bce at term 4"}
	{"level":"info","ts":"2022-10-31T18:16:29.032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7df1350fafd42bce became leader at term 4"}
	{"level":"info","ts":"2022-10-31T18:16:29.032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7df1350fafd42bce elected leader 7df1350fafd42bce at term 4"}
	{"level":"info","ts":"2022-10-31T18:16:29.032Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"7df1350fafd42bce","local-member-attributes":"{Name:multinode-175611 ClientURLs:[https://192.168.39.114:2379]}","request-path":"/0/members/7df1350fafd42bce/attributes","cluster-id":"101f5850ef417740","publish-timeout":"7s"}
	{"level":"info","ts":"2022-10-31T18:16:29.032Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-10-31T18:16:29.033Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-10-31T18:16:29.034Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.114:2379"}
	{"level":"info","ts":"2022-10-31T18:16:29.034Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-10-31T18:16:29.034Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-10-31T18:16:29.034Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  18:26:13 up 10 min,  0 users,  load average: 0.72, 0.30, 0.14
	Linux multinode-175611 5.10.57 #1 SMP Wed Oct 19 23:03:20 UTC 2022 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [71635fe14f2a] <==
	* I1031 18:16:31.228185       1 establishing_controller.go:76] Starting EstablishingController
	I1031 18:16:31.228360       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1031 18:16:31.228440       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1031 18:16:31.228535       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1031 18:16:31.264117       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1031 18:16:31.274608       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1031 18:16:31.275930       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1031 18:16:31.276089       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	E1031 18:16:31.326518       1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1031 18:16:31.361161       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1031 18:16:31.376541       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1031 18:16:31.408132       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I1031 18:16:31.419307       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1031 18:16:31.421238       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1031 18:16:31.422782       1 cache.go:39] Caches are synced for autoregister controller
	I1031 18:16:31.424171       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I1031 18:16:31.424530       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1031 18:16:31.963029       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1031 18:16:32.222238       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1031 18:16:33.833921       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I1031 18:16:33.979627       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I1031 18:16:33.993941       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I1031 18:16:34.055639       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1031 18:16:34.062965       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1031 18:17:36.952169       1 controller.go:616] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [89bcd7b3aa70] <==
	* I1031 18:01:37.727304       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1031 18:01:37.727317       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1031 18:01:37.730574       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1031 18:01:37.732562       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I1031 18:01:37.732743       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1031 18:01:37.733624       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1031 18:01:37.761616       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I1031 18:01:37.844108       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1031 18:01:37.846638       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1031 18:01:37.849496       1 cache.go:39] Caches are synced for autoregister controller
	E1031 18:01:37.851222       1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1031 18:01:37.852188       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1031 18:01:37.864521       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1031 18:01:37.877930       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1031 18:01:37.878359       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I1031 18:01:37.894700       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I1031 18:01:38.470876       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1031 18:01:38.732300       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1031 18:01:40.811411       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I1031 18:01:40.919924       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I1031 18:01:40.930234       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I1031 18:01:40.989709       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1031 18:01:40.996301       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1031 18:01:50.866727       1 controller.go:616] quota admission added evaluator for: endpoints
	I1031 18:01:50.894896       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [30d1c1171fc7] <==
	* I1031 18:01:51.324385       1 shared_informer.go:262] Caches are synced for garbage collector
	I1031 18:01:51.324429       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1031 18:01:51.329724       1 shared_informer.go:262] Caches are synced for garbage collector
	W1031 18:02:30.992058       1 topologycache.go:199] Can't get CPU or zone information for multinode-175611-m02 node
	I1031 18:02:30.993896       1 event.go:294] "Event occurred" object="multinode-175611-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-175611-m03 status is now: NodeNotReady"
	I1031 18:02:31.004158       1 event.go:294] "Event occurred" object="kube-system/kindnet-svfcl" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1031 18:02:31.013439       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-4xkjz" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1031 18:02:31.023916       1 event.go:294] "Event occurred" object="multinode-175611-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-175611-m02 status is now: NodeNotReady"
	I1031 18:02:31.037642       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-x6h9n" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1031 18:02:31.051761       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-p6579" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1031 18:02:31.059691       1 event.go:294] "Event occurred" object="kube-system/kindnet-9kfkh" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1031 18:06:23.738585       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-7ch9q"
	W1031 18:06:27.730619       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-175611-m02" does not exist
	I1031 18:06:27.732826       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-p6579" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-p6579"
	I1031 18:06:27.744306       1 range_allocator.go:367] Set node multinode-175611-m02 PodCIDR to [10.244.1.0/24]
	W1031 18:06:38.164414       1 topologycache.go:199] Can't get CPU or zone information for multinode-175611-m02 node
	I1031 18:06:41.110286       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-p6579" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-p6579"
	W1031 18:11:03.119068       1 topologycache.go:199] Can't get CPU or zone information for multinode-175611-m02 node
	W1031 18:11:03.937259       1 topologycache.go:199] Can't get CPU or zone information for multinode-175611-m02 node
	W1031 18:11:03.938173       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-175611-m03" does not exist
	I1031 18:11:03.947759       1 range_allocator.go:367] Set node multinode-175611-m03 PodCIDR to [10.244.2.0/24]
	W1031 18:11:44.982759       1 topologycache.go:199] Can't get CPU or zone information for multinode-175611-m03 node
	I1031 18:11:46.168801       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-7ch9q" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-7ch9q"
	I1031 18:15:47.691901       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-hs5pp"
	W1031 18:15:49.696196       1 topologycache.go:199] Can't get CPU or zone information for multinode-175611-m02 node
	
	* 
	* ==> kube-controller-manager [741b9d7665bb] <==
	* I1031 18:16:44.252381       1 shared_informer.go:262] Caches are synced for persistent volume
	I1031 18:16:44.267417       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1031 18:16:44.331801       1 shared_informer.go:262] Caches are synced for resource quota
	I1031 18:16:44.354142       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I1031 18:16:44.377924       1 shared_informer.go:262] Caches are synced for HPA
	I1031 18:16:44.383901       1 shared_informer.go:262] Caches are synced for resource quota
	I1031 18:16:44.771908       1 shared_informer.go:262] Caches are synced for garbage collector
	I1031 18:16:44.811025       1 shared_informer.go:262] Caches are synced for garbage collector
	I1031 18:16:44.811068       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	W1031 18:17:22.616402       1 topologycache.go:199] Can't get CPU or zone information for multinode-175611-m02 node
	I1031 18:17:24.210086       1 event.go:294] "Event occurred" object="multinode-175611-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-175611-m02 status is now: NodeNotReady"
	I1031 18:17:24.229198       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-x6h9n" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1031 18:17:24.246574       1 gc_controller.go:324] "PodGC is force deleting Pod" pod="kube-system/kindnet-svfcl"
	I1031 18:17:24.259845       1 event.go:294] "Event occurred" object="kube-system/kindnet-9kfkh" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1031 18:17:24.284179       1 gc_controller.go:252] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-svfcl"
	I1031 18:17:24.284196       1 gc_controller.go:324] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-4xkjz"
	I1031 18:17:24.293680       1 event.go:294] "Event occurred" object="kube-system/coredns-565d847f94-vwsgh" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-565d847f94-vwsgh"
	I1031 18:17:24.293849       1 event.go:294] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I1031 18:17:24.293870       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-m9bbn" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-m9bbn"
	I1031 18:17:24.309642       1 gc_controller.go:252] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-4xkjz"
	I1031 18:21:58.486818       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-p6579" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-p6579"
	W1031 18:21:58.487019       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-175611-m02" does not exist
	I1031 18:21:58.501070       1 range_allocator.go:367] Set node multinode-175611-m02 PodCIDR to [10.244.1.0/24]
	W1031 18:22:08.581567       1 topologycache.go:199] Can't get CPU or zone information for multinode-175611-m02 node
	I1031 18:22:09.342543       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-p6579" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-p6579"
	
	* 
	* ==> kube-proxy [493b45ebbbc7] <==
	* I1031 18:01:39.524498       1 node.go:163] Successfully retrieved node IP: 192.168.39.114
	I1031 18:01:39.524684       1 server_others.go:138] "Detected node IP" address="192.168.39.114"
	I1031 18:01:39.524782       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1031 18:01:39.597571       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1031 18:01:39.597685       1 server_others.go:206] "Using iptables Proxier"
	I1031 18:01:39.598344       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1031 18:01:39.602463       1 server.go:661] "Version info" version="v1.25.3"
	I1031 18:01:39.602701       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 18:01:39.608895       1 config.go:317] "Starting service config controller"
	I1031 18:01:39.610219       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1031 18:01:39.610329       1 config.go:226] "Starting endpoint slice config controller"
	I1031 18:01:39.610410       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1031 18:01:39.613608       1 config.go:444] "Starting node config controller"
	I1031 18:01:39.613754       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1031 18:01:39.711753       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1031 18:01:39.711839       1 shared_informer.go:262] Caches are synced for service config
	I1031 18:01:39.715047       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-proxy [f1df0fd45577] <==
	* I1031 18:16:47.641529       1 node.go:163] Successfully retrieved node IP: 192.168.39.114
	I1031 18:16:47.641615       1 server_others.go:138] "Detected node IP" address="192.168.39.114"
	I1031 18:16:47.641636       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1031 18:16:47.672052       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1031 18:16:47.672088       1 server_others.go:206] "Using iptables Proxier"
	I1031 18:16:47.673082       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1031 18:16:47.673617       1 server.go:661] "Version info" version="v1.25.3"
	I1031 18:16:47.673652       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 18:16:47.676159       1 config.go:317] "Starting service config controller"
	I1031 18:16:47.676199       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1031 18:16:47.676995       1 config.go:226] "Starting endpoint slice config controller"
	I1031 18:16:47.677032       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1031 18:16:47.684020       1 config.go:444] "Starting node config controller"
	I1031 18:16:47.684053       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1031 18:16:47.777435       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1031 18:16:47.777540       1 shared_informer.go:262] Caches are synced for service config
	I1031 18:16:47.784131       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [690e0b37aeae] <==
	* I1031 18:16:28.638417       1 serving.go:348] Generated self-signed cert in-memory
	W1031 18:16:31.299900       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1031 18:16:31.300178       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1031 18:16:31.300220       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1031 18:16:31.300379       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1031 18:16:31.331956       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I1031 18:16:31.331995       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 18:16:31.339598       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1031 18:16:31.341151       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1031 18:16:31.341632       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1031 18:16:31.341781       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1031 18:16:31.443155       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [ed32bb110bbd] <==
	* I1031 18:01:34.529605       1 serving.go:348] Generated self-signed cert in-memory
	W1031 18:01:37.777963       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1031 18:01:37.778458       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1031 18:01:37.778689       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1031 18:01:37.778718       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1031 18:01:37.818378       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I1031 18:01:37.818419       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1031 18:01:37.827777       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1031 18:01:37.834438       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1031 18:01:37.834480       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1031 18:01:37.834750       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1031 18:01:37.936126       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2022-10-31 18:16:06 UTC, ends at Mon 2022-10-31 18:26:13 UTC. --
	Oct 31 18:25:31 multinode-175611 kubelet[1225]: E1031 18:25:31.378587    1225 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31aead9d-cbbe-45a7-9552-aa7dc7128d67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-m9bbn_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-65db55d5d6-m9bbn" podUID=31aead9d-cbbe-45a7-9552-aa7dc7128d67
	Oct 31 18:25:41 multinode-175611 kubelet[1225]: E1031 18:25:41.377986    1225 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"coredns-565d847f94-vwsgh_kube-system\" network: could not retrieve port mappings: key is not found" podSandboxID="ea5ed99abc59c0af6196b340f7b5f4d97cd220501df0aa3bb253ea364c2a788b"
	Oct 31 18:25:41 multinode-175611 kubelet[1225]: E1031 18:25:41.378284    1225 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:ea5ed99abc59c0af6196b340f7b5f4d97cd220501df0aa3bb253ea364c2a788b}
	Oct 31 18:25:41 multinode-175611 kubelet[1225]: E1031 18:25:41.378350    1225 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"43c956b8-aa61-43e5-b432-f59ccdffde38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-vwsgh_kube-system\\\" network: could not retrieve port mappings: key is not found\""
	Oct 31 18:25:41 multinode-175611 kubelet[1225]: E1031 18:25:41.378437    1225 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"43c956b8-aa61-43e5-b432-f59ccdffde38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-vwsgh_kube-system\\\" network: could not retrieve port mappings: key is not found\"" pod="kube-system/coredns-565d847f94-vwsgh" podUID=43c956b8-aa61-43e5-b432-f59ccdffde38
	Oct 31 18:25:42 multinode-175611 kubelet[1225]: E1031 18:25:42.378571    1225 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-65db55d5d6-m9bbn_default\" network: could not retrieve port mappings: key is not found" podSandboxID="efb2f0b39793a85541f6c0a40788a452206ba6f1b1d306c4e3b9f3e4e6991f87"
	Oct 31 18:25:42 multinode-175611 kubelet[1225]: E1031 18:25:42.378607    1225 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:efb2f0b39793a85541f6c0a40788a452206ba6f1b1d306c4e3b9f3e4e6991f87}
	Oct 31 18:25:42 multinode-175611 kubelet[1225]: E1031 18:25:42.378634    1225 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"31aead9d-cbbe-45a7-9552-aa7dc7128d67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-m9bbn_default\\\" network: could not retrieve port mappings: key is not found\""
	Oct 31 18:25:42 multinode-175611 kubelet[1225]: E1031 18:25:42.378656    1225 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31aead9d-cbbe-45a7-9552-aa7dc7128d67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-m9bbn_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-65db55d5d6-m9bbn" podUID=31aead9d-cbbe-45a7-9552-aa7dc7128d67
	Oct 31 18:25:52 multinode-175611 kubelet[1225]: E1031 18:25:52.378905    1225 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"coredns-565d847f94-vwsgh_kube-system\" network: could not retrieve port mappings: key is not found" podSandboxID="ea5ed99abc59c0af6196b340f7b5f4d97cd220501df0aa3bb253ea364c2a788b"
	Oct 31 18:25:52 multinode-175611 kubelet[1225]: E1031 18:25:52.378972    1225 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:ea5ed99abc59c0af6196b340f7b5f4d97cd220501df0aa3bb253ea364c2a788b}
	Oct 31 18:25:52 multinode-175611 kubelet[1225]: E1031 18:25:52.379006    1225 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"43c956b8-aa61-43e5-b432-f59ccdffde38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-vwsgh_kube-system\\\" network: could not retrieve port mappings: key is not found\""
	Oct 31 18:25:52 multinode-175611 kubelet[1225]: E1031 18:25:52.379027    1225 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"43c956b8-aa61-43e5-b432-f59ccdffde38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-vwsgh_kube-system\\\" network: could not retrieve port mappings: key is not found\"" pod="kube-system/coredns-565d847f94-vwsgh" podUID=43c956b8-aa61-43e5-b432-f59ccdffde38
	Oct 31 18:25:56 multinode-175611 kubelet[1225]: E1031 18:25:56.377162    1225 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-65db55d5d6-m9bbn_default\" network: could not retrieve port mappings: key is not found" podSandboxID="efb2f0b39793a85541f6c0a40788a452206ba6f1b1d306c4e3b9f3e4e6991f87"
	Oct 31 18:25:56 multinode-175611 kubelet[1225]: E1031 18:25:56.377502    1225 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:efb2f0b39793a85541f6c0a40788a452206ba6f1b1d306c4e3b9f3e4e6991f87}
	Oct 31 18:25:56 multinode-175611 kubelet[1225]: E1031 18:25:56.377577    1225 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"31aead9d-cbbe-45a7-9552-aa7dc7128d67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-m9bbn_default\\\" network: could not retrieve port mappings: key is not found\""
	Oct 31 18:25:56 multinode-175611 kubelet[1225]: E1031 18:25:56.377632    1225 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31aead9d-cbbe-45a7-9552-aa7dc7128d67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-m9bbn_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-65db55d5d6-m9bbn" podUID=31aead9d-cbbe-45a7-9552-aa7dc7128d67
	Oct 31 18:26:07 multinode-175611 kubelet[1225]: E1031 18:26:07.378267    1225 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"coredns-565d847f94-vwsgh_kube-system\" network: could not retrieve port mappings: key is not found" podSandboxID="ea5ed99abc59c0af6196b340f7b5f4d97cd220501df0aa3bb253ea364c2a788b"
	Oct 31 18:26:07 multinode-175611 kubelet[1225]: E1031 18:26:07.378325    1225 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:ea5ed99abc59c0af6196b340f7b5f4d97cd220501df0aa3bb253ea364c2a788b}
	Oct 31 18:26:07 multinode-175611 kubelet[1225]: E1031 18:26:07.378360    1225 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"43c956b8-aa61-43e5-b432-f59ccdffde38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-vwsgh_kube-system\\\" network: could not retrieve port mappings: key is not found\""
	Oct 31 18:26:07 multinode-175611 kubelet[1225]: E1031 18:26:07.378385    1225 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"43c956b8-aa61-43e5-b432-f59ccdffde38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"coredns-565d847f94-vwsgh_kube-system\\\" network: could not retrieve port mappings: key is not found\"" pod="kube-system/coredns-565d847f94-vwsgh" podUID=43c956b8-aa61-43e5-b432-f59ccdffde38
	Oct 31 18:26:10 multinode-175611 kubelet[1225]: E1031 18:26:10.379782    1225 remote_runtime.go:269] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-65db55d5d6-m9bbn_default\" network: could not retrieve port mappings: key is not found" podSandboxID="efb2f0b39793a85541f6c0a40788a452206ba6f1b1d306c4e3b9f3e4e6991f87"
	Oct 31 18:26:10 multinode-175611 kubelet[1225]: E1031 18:26:10.379858    1225 kuberuntime_manager.go:954] "Failed to stop sandbox" podSandboxID={Type:docker ID:efb2f0b39793a85541f6c0a40788a452206ba6f1b1d306c4e3b9f3e4e6991f87}
	Oct 31 18:26:10 multinode-175611 kubelet[1225]: E1031 18:26:10.379897    1225 kuberuntime_manager.go:695] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"31aead9d-cbbe-45a7-9552-aa7dc7128d67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-m9bbn_default\\\" network: could not retrieve port mappings: key is not found\""
	Oct 31 18:26:10 multinode-175611 kubelet[1225]: E1031 18:26:10.379920    1225 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31aead9d-cbbe-45a7-9552-aa7dc7128d67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \\\"busybox-65db55d5d6-m9bbn_default\\\" network: could not retrieve port mappings: key is not found\"" pod="default/busybox-65db55d5d6-m9bbn" podUID=31aead9d-cbbe-45a7-9552-aa7dc7128d67
	
	* 
	* ==> storage-provisioner [c29420223e2a] <==
	* I1031 18:17:19.534824       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1031 18:17:19.557432       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1031 18:17:19.557824       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1031 18:17:36.954508       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1031 18:17:36.970840       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-175611_e0a6ea65-758b-4cc3-8b06-820aaeda49ab!
	I1031 18:17:36.991922       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dea15414-5b61-4984-9310-a6530f2c62a2", APIVersion:"v1", ResourceVersion:"1917", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-175611_e0a6ea65-758b-4cc3-8b06-820aaeda49ab became leader
	I1031 18:17:37.110055       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-175611_e0a6ea65-758b-4cc3-8b06-820aaeda49ab!
	
	* 
	* ==> storage-provisioner [c500f01efc43] <==
	* I1031 18:16:34.376044       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1031 18:17:04.399354       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-175611 -n multinode-175611
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-175611 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox-65db55d5d6-hs5pp
helpers_test.go:272: ======> post-mortem[TestMultiNode/serial/ValidateNameConflict]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context multinode-175611 describe pod busybox-65db55d5d6-hs5pp
helpers_test.go:280: (dbg) kubectl --context multinode-175611 describe pod busybox-65db55d5d6-hs5pp:

                                                
                                                
-- stdout --
	Name:             busybox-65db55d5d6-hs5pp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=65db55d5d6
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-65db55d5d6
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qkqsf (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-qkqsf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                    From               Message
	  ----     ------            ----                   ----               -------
	  Warning  FailedScheduling  10m                    default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  10m                    default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  8m47s (x2 over 8m49s)  default-scheduler  0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  4m5s (x2 over 9m42s)   default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:283: <<< TestMultiNode/serial/ValidateNameConflict FAILED: end of post-mortem logs <<<
helpers_test.go:284: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/ValidateNameConflict (3.09s)
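The FailedScheduling events in the post-mortem above repeatedly cite "node(s) didn't match pod anti-affinity rules". A minimal hypothetical Deployment spec illustrating the kind of required podAntiAffinity that produces these events (the image and command match the describe output; the replica count and manifest structure are illustrative, not the test's actual deployment):

```yaml
# Hypothetical sketch: with a required anti-affinity on its own app label
# and topologyKey kubernetes.io/hostname, each replica must land on a
# distinct node. When fewer schedulable nodes than replicas remain (e.g.
# after a node is deleted or tainted unschedulable, as in the events
# above), the extra pod stays Pending with FailedScheduling.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
spec:
  replicas: 2
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: busybox
              topologyKey: kubernetes.io/hostname
      containers:
        - name: busybox
          image: gcr.io/k8s-minikube/busybox:1.28
          command: ["sleep", "3600"]
```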

TestKubernetesUpgrade (172.14s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-183258 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
E1031 18:33:21.939795   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-183258 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (1m6.109705577s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-183258
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-183258: (3.114239398s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-183258 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-183258 status --format={{.Host}}: exit status 7 (109.092508ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-183258 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=kvm2 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-183258 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=kvm2 : (1m23.898723187s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-183258 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-183258 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-183258 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (200.206827ms)

-- stdout --
	* [kubernetes-upgrade-183258] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15242-42743/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-42743/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.25.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-183258
	    minikube start -p kubernetes-upgrade-183258 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1832582 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.25.3, by running:
	    
	    minikube start -p kubernetes-upgrade-183258 --kubernetes-version=v1.25.3
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-183258 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=kvm2 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-183258 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=kvm2 : exit status 90 (14.868325477s)

-- stdout --
	* [kubernetes-upgrade-183258] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15242-42743/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-42743/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the kvm2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-183258 in cluster kubernetes-upgrade-183258
	* Updating the running kvm2 "kubernetes-upgrade-183258" VM ...
	
	

-- /stdout --
** stderr ** 
	I1031 18:35:31.911476   66096 out.go:296] Setting OutFile to fd 1 ...
	I1031 18:35:31.911621   66096 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:35:31.911640   66096 out.go:309] Setting ErrFile to fd 2...
	I1031 18:35:31.911647   66096 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:35:31.911854   66096 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15242-42743/.minikube/bin
	I1031 18:35:31.912610   66096 out.go:303] Setting JSON to false
	I1031 18:35:31.913777   66096 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8284,"bootTime":1667233048,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 18:35:31.913915   66096 start.go:126] virtualization: kvm guest
	I1031 18:35:32.001900   66096 out.go:177] * [kubernetes-upgrade-183258] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1031 18:35:32.003561   66096 notify.go:220] Checking for updates...
	I1031 18:35:32.004947   66096 out.go:177]   - MINIKUBE_LOCATION=15242
	I1031 18:35:32.006845   66096 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 18:35:32.008281   66096 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15242-42743/kubeconfig
	I1031 18:35:32.009646   66096 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-42743/.minikube
	I1031 18:35:32.010900   66096 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 18:35:32.012759   66096 config.go:180] Loaded profile config "kubernetes-upgrade-183258": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1031 18:35:32.013322   66096 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:35:32.013375   66096 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:35:32.034270   66096 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:42039
	I1031 18:35:32.034740   66096 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:35:32.035339   66096 main.go:134] libmachine: Using API Version  1
	I1031 18:35:32.035366   66096 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:35:32.035783   66096 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:35:32.035982   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .DriverName
	I1031 18:35:32.036274   66096 driver.go:365] Setting default libvirt URI to qemu:///system
	I1031 18:35:32.036610   66096 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:35:32.036657   66096 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:35:32.060713   66096 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:46047
	I1031 18:35:32.061268   66096 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:35:32.061831   66096 main.go:134] libmachine: Using API Version  1
	I1031 18:35:32.061854   66096 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:35:32.062339   66096 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:35:32.062516   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .DriverName
	I1031 18:35:32.128422   66096 out.go:177] * Using the kvm2 driver based on existing profile
	I1031 18:35:32.130977   66096 start.go:282] selected driver: kvm2
	I1031 18:35:32.131008   66096 start.go:808] validating driver "kvm2" against &{Name:kubernetes-upgrade-183258 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15159/minikube-v1.27.0-1666206003-15159-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernet
esConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-183258 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 18:35:32.131192   66096 start.go:819] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 18:35:32.132368   66096 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 18:35:32.132562   66096 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15242-42743/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 18:35:32.160777   66096 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.27.1
	I1031 18:35:32.161265   66096 cni.go:95] Creating CNI manager for ""
	I1031 18:35:32.161293   66096 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1031 18:35:32.161308   66096 start_flags.go:317] config:
	{Name:kubernetes-upgrade-183258 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15159/minikube-v1.27.0-1666206003-15159-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-183258 Name
space:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-a
liases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 18:35:32.161893   66096 iso.go:124] acquiring lock: {Name:mk1b8df3d0e7e7151d07f634c55bc8cb360d70d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 18:35:32.164046   66096 out.go:177] * Starting control plane node kubernetes-upgrade-183258 in cluster kubernetes-upgrade-183258
	I1031 18:35:32.165465   66096 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1031 18:35:32.165518   66096 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15242-42743/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1031 18:35:32.165545   66096 cache.go:57] Caching tarball of preloaded images
	I1031 18:35:32.165696   66096 preload.go:174] Found /home/jenkins/minikube-integration/15242-42743/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1031 18:35:32.165724   66096 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1031 18:35:32.165908   66096 profile.go:148] Saving config to /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kubernetes-upgrade-183258/config.json ...
	I1031 18:35:32.166145   66096 cache.go:208] Successfully downloaded all kic artifacts
	I1031 18:35:32.166173   66096 start.go:364] acquiring machines lock for kubernetes-upgrade-183258: {Name:mk15de2cb0eed92cba3648c402e45ec73a1cbfb5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 18:35:32.166253   66096 start.go:368] acquired machines lock for "kubernetes-upgrade-183258" in 57.15µs
	I1031 18:35:32.166273   66096 start.go:96] Skipping create...Using existing machine configuration
	I1031 18:35:32.166279   66096 fix.go:55] fixHost starting: 
	I1031 18:35:32.166721   66096 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:35:32.166768   66096 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:35:32.188392   66096 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:42979
	I1031 18:35:32.189140   66096 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:35:32.189715   66096 main.go:134] libmachine: Using API Version  1
	I1031 18:35:32.189744   66096 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:35:32.190169   66096 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:35:32.190381   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .DriverName
	I1031 18:35:32.190544   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetState
	I1031 18:35:32.192641   66096 fix.go:103] recreateIfNeeded on kubernetes-upgrade-183258: state=Running err=<nil>
	W1031 18:35:32.192665   66096 fix.go:129] unexpected machine state, will restart: <nil>
	I1031 18:35:32.194473   66096 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-183258" VM ...
	I1031 18:35:32.195575   66096 machine.go:88] provisioning docker machine ...
	I1031 18:35:32.195602   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .DriverName
	I1031 18:35:32.195864   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetMachineName
	I1031 18:35:32.196043   66096 buildroot.go:166] provisioning hostname "kubernetes-upgrade-183258"
	I1031 18:35:32.196071   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetMachineName
	I1031 18:35:32.196249   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHHostname
	I1031 18:35:32.199322   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:32.199847   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:32.199876   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:32.200039   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHPort
	I1031 18:35:32.200212   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:32.200353   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:32.200510   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHUsername
	I1031 18:35:32.200688   66096 main.go:134] libmachine: Using SSH client type: native
	I1031 18:35:32.200878   66096 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I1031 18:35:32.200894   66096 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-183258 && echo "kubernetes-upgrade-183258" | sudo tee /etc/hostname
	I1031 18:35:32.354527   66096 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-183258
	
	I1031 18:35:32.354553   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHHostname
	I1031 18:35:32.357394   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:32.357801   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:32.357850   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:32.358072   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHPort
	I1031 18:35:32.358277   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:32.358473   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:32.358697   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHUsername
	I1031 18:35:32.358904   66096 main.go:134] libmachine: Using SSH client type: native
	I1031 18:35:32.359053   66096 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I1031 18:35:32.359071   66096 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-183258' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-183258/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-183258' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 18:35:32.505476   66096 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1031 18:35:32.505510   66096 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15242-42743/.minikube CaCertPath:/home/jenkins/minikube-integration/15242-42743/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15242-42743/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15242-42743/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15242-42743/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15242-42743/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15242-42743/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15242-42743/.minikube}
	I1031 18:35:32.505554   66096 buildroot.go:174] setting up certificates
	I1031 18:35:32.505566   66096 provision.go:83] configureAuth start
	I1031 18:35:32.505579   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetMachineName
	I1031 18:35:32.505972   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetIP
	I1031 18:35:32.509384   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:32.509848   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:32.509886   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:32.510285   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHHostname
	I1031 18:35:32.512944   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:32.513351   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:32.513396   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:32.513626   66096 provision.go:138] copyHostCerts
	I1031 18:35:32.513692   66096 exec_runner.go:144] found /home/jenkins/minikube-integration/15242-42743/.minikube/ca.pem, removing ...
	I1031 18:35:32.513711   66096 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15242-42743/.minikube/ca.pem
	I1031 18:35:32.513771   66096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15242-42743/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15242-42743/.minikube/ca.pem (1078 bytes)
	I1031 18:35:32.513877   66096 exec_runner.go:144] found /home/jenkins/minikube-integration/15242-42743/.minikube/cert.pem, removing ...
	I1031 18:35:32.513889   66096 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15242-42743/.minikube/cert.pem
	I1031 18:35:32.513952   66096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15242-42743/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15242-42743/.minikube/cert.pem (1123 bytes)
	I1031 18:35:32.514020   66096 exec_runner.go:144] found /home/jenkins/minikube-integration/15242-42743/.minikube/key.pem, removing ...
	I1031 18:35:32.514033   66096 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15242-42743/.minikube/key.pem
	I1031 18:35:32.514061   66096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15242-42743/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15242-42743/.minikube/key.pem (1675 bytes)
	I1031 18:35:32.514120   66096 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15242-42743/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15242-42743/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15242-42743/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-183258 san=[192.168.39.232 192.168.39.232 localhost 127.0.0.1 minikube kubernetes-upgrade-183258]
	I1031 18:35:32.774490   66096 provision.go:172] copyRemoteCerts
	I1031 18:35:32.774866   66096 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 18:35:32.774913   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHHostname
	I1031 18:35:32.779602   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:32.780375   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:32.780469   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:32.780746   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHPort
	I1031 18:35:32.781043   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:32.781207   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHUsername
	I1031 18:35:32.781358   66096 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15242-42743/.minikube/machines/kubernetes-upgrade-183258/id_rsa Username:docker}
	I1031 18:35:32.894096   66096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-42743/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 18:35:32.926015   66096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-42743/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1031 18:35:32.961546   66096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-42743/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1031 18:35:32.993422   66096 provision.go:86] duration metric: configureAuth took 487.840281ms
	I1031 18:35:32.993456   66096 buildroot.go:189] setting minikube options for container-runtime
	I1031 18:35:32.993718   66096 config.go:180] Loaded profile config "kubernetes-upgrade-183258": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1031 18:35:32.993759   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .DriverName
	I1031 18:35:32.994094   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHHostname
	I1031 18:35:32.997352   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:32.997893   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:32.997971   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:32.998216   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHPort
	I1031 18:35:32.998431   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:32.998639   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:32.998807   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHUsername
	I1031 18:35:32.999025   66096 main.go:134] libmachine: Using SSH client type: native
	I1031 18:35:32.999187   66096 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I1031 18:35:32.999208   66096 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1031 18:35:33.146628   66096 main.go:134] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1031 18:35:33.146657   66096 buildroot.go:70] root file system type: tmpfs
	I1031 18:35:33.146843   66096 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1031 18:35:33.146873   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHHostname
	I1031 18:35:33.150479   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.150913   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:33.150986   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.151295   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHPort
	I1031 18:35:33.151502   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:33.151724   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:33.151937   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHUsername
	I1031 18:35:33.152131   66096 main.go:134] libmachine: Using SSH client type: native
	I1031 18:35:33.152304   66096 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I1031 18:35:33.152407   66096 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1031 18:35:33.314469   66096 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1031 18:35:33.314511   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHHostname
	I1031 18:35:33.317981   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.318445   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:33.318488   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.318721   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHPort
	I1031 18:35:33.319008   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:33.319316   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:33.319518   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHUsername
	I1031 18:35:33.319700   66096 main.go:134] libmachine: Using SSH client type: native
	I1031 18:35:33.319850   66096 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I1031 18:35:33.319875   66096 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1031 18:35:33.466468   66096 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1031 18:35:33.466511   66096 machine.go:91] provisioned docker machine in 1.270917739s
	I1031 18:35:33.466524   66096 start.go:300] post-start starting for "kubernetes-upgrade-183258" (driver="kvm2")
	I1031 18:35:33.466533   66096 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 18:35:33.466559   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .DriverName
	I1031 18:35:33.466880   66096 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 18:35:33.466930   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHHostname
	I1031 18:35:33.470346   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.470769   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:33.470808   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.471024   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHPort
	I1031 18:35:33.471252   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:33.471444   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHUsername
	I1031 18:35:33.471635   66096 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15242-42743/.minikube/machines/kubernetes-upgrade-183258/id_rsa Username:docker}
	I1031 18:35:33.571386   66096 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 18:35:33.575986   66096 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 18:35:33.576021   66096 filesync.go:126] Scanning /home/jenkins/minikube-integration/15242-42743/.minikube/addons for local assets ...
	I1031 18:35:33.576097   66096 filesync.go:126] Scanning /home/jenkins/minikube-integration/15242-42743/.minikube/files for local assets ...
	I1031 18:35:33.576199   66096 filesync.go:149] local asset: /home/jenkins/minikube-integration/15242-42743/.minikube/files/etc/ssl/certs/495292.pem -> 495292.pem in /etc/ssl/certs
	I1031 18:35:33.576304   66096 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 18:35:33.588125   66096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-42743/.minikube/files/etc/ssl/certs/495292.pem --> /etc/ssl/certs/495292.pem (1708 bytes)
	I1031 18:35:33.619971   66096 start.go:303] post-start completed in 153.431477ms
	I1031 18:35:33.620000   66096 fix.go:57] fixHost completed within 1.453720809s
	I1031 18:35:33.620029   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHHostname
	I1031 18:35:33.623346   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.623772   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:33.623815   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.624190   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHPort
	I1031 18:35:33.624407   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:33.624576   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:33.624788   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHUsername
	I1031 18:35:33.624999   66096 main.go:134] libmachine: Using SSH client type: native
	I1031 18:35:33.625211   66096 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I1031 18:35:33.625235   66096 main.go:134] libmachine: About to run SSH command:
	date +%s.%N
	I1031 18:35:33.820237   66096 main.go:134] libmachine: SSH cmd err, output: <nil>: 1667241333.785620347
	
	I1031 18:35:33.820270   66096 fix.go:207] guest clock: 1667241333.785620347
	I1031 18:35:33.820282   66096 fix.go:220] Guest: 2022-10-31 18:35:33.785620347 +0000 UTC Remote: 2022-10-31 18:35:33.620005073 +0000 UTC m=+1.804122754 (delta=165.615274ms)
	I1031 18:35:33.820307   66096 fix.go:191] guest clock delta is within tolerance: 165.615274ms
	I1031 18:35:33.820315   66096 start.go:83] releasing machines lock for "kubernetes-upgrade-183258", held for 1.654048416s
	I1031 18:35:33.820361   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .DriverName
	I1031 18:35:33.820744   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetIP
	I1031 18:35:33.824345   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.824871   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:33.824909   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.825086   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .DriverName
	I1031 18:35:33.825550   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .DriverName
	I1031 18:35:33.825708   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .DriverName
	I1031 18:35:33.825783   66096 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 18:35:33.825833   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHHostname
	I1031 18:35:33.826190   66096 ssh_runner.go:195] Run: systemctl --version
	I1031 18:35:33.826221   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHHostname
	I1031 18:35:33.829455   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.829695   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.830111   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:33.830170   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:33.830215   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.830243   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.830476   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHPort
	I1031 18:35:33.830685   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHPort
	I1031 18:35:33.830690   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:33.830902   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:33.830989   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHUsername
	I1031 18:35:33.831095   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHUsername
	I1031 18:35:33.831202   66096 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15242-42743/.minikube/machines/kubernetes-upgrade-183258/id_rsa Username:docker}
	I1031 18:35:33.831310   66096 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15242-42743/.minikube/machines/kubernetes-upgrade-183258/id_rsa Username:docker}
	I1031 18:35:33.968321   66096 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1031 18:35:33.968511   66096 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1031 18:35:34.001728   66096 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1031 18:35:34.001749   66096 docker.go:543] Images already preloaded, skipping extraction
	I1031 18:35:34.001818   66096 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1031 18:35:34.015346   66096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 18:35:34.028968   66096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 18:35:34.041869   66096 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1031 18:35:34.065099   66096 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1031 18:35:34.231538   66096 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1031 18:35:34.398118   66096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 18:35:34.613712   66096 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 18:35:46.650394   66096 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.036593604s)
	I1031 18:35:46.654224   66096 out.go:177] 
	W1031 18:35:46.656183   66096 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	W1031 18:35:46.656205   66096 out.go:239] * 
	W1031 18:35:46.657049   66096 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1031 18:35:46.674844   66096 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:284: start after failed upgrade: out/minikube-linux-amd64 start -p kubernetes-upgrade-183258 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=kvm2 : exit status 90
version_upgrade_test.go:286: *** TestKubernetesUpgrade FAILED at 2022-10-31 18:35:46.688315096 +0000 UTC m=+3336.155817603
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-183258 -n kubernetes-upgrade-183258

                                                
                                                
=== CONT  TestKubernetesUpgrade
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-183258 -n kubernetes-upgrade-183258: exit status 2 (292.337927ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-183258 logs -n 25
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |------------|------------------------------------------------|---------------------------|----------|---------|---------------------|---------------------|
	|  Command   |                      Args                      |          Profile          |   User   | Version |     Start Time      |      End Time       |
	|------------|------------------------------------------------|---------------------------|----------|---------|---------------------|---------------------|
	| stop       | -p scheduled-stop-182920                       | scheduled-stop-182920     | jenkins  | v1.27.1 | 31 Oct 22 18:30 UTC |                     |
	|            | --schedule 5m                                  |                           |          |         |                     |                     |
	| stop       | -p scheduled-stop-182920                       | scheduled-stop-182920     | jenkins  | v1.27.1 | 31 Oct 22 18:30 UTC |                     |
	|            | --schedule 15s                                 |                           |          |         |                     |                     |
	| stop       | -p scheduled-stop-182920                       | scheduled-stop-182920     | jenkins  | v1.27.1 | 31 Oct 22 18:30 UTC |                     |
	|            | --schedule 15s                                 |                           |          |         |                     |                     |
	| stop       | -p scheduled-stop-182920                       | scheduled-stop-182920     | jenkins  | v1.27.1 | 31 Oct 22 18:30 UTC |                     |
	|            | --schedule 15s                                 |                           |          |         |                     |                     |
	| stop       | -p scheduled-stop-182920                       | scheduled-stop-182920     | jenkins  | v1.27.1 | 31 Oct 22 18:30 UTC | 31 Oct 22 18:30 UTC |
	|            | --cancel-scheduled                             |                           |          |         |                     |                     |
	| stop       | -p scheduled-stop-182920                       | scheduled-stop-182920     | jenkins  | v1.27.1 | 31 Oct 22 18:30 UTC |                     |
	|            | --schedule 15s                                 |                           |          |         |                     |                     |
	| stop       | -p scheduled-stop-182920                       | scheduled-stop-182920     | jenkins  | v1.27.1 | 31 Oct 22 18:30 UTC |                     |
	|            | --schedule 15s                                 |                           |          |         |                     |                     |
	| stop       | -p scheduled-stop-182920                       | scheduled-stop-182920     | jenkins  | v1.27.1 | 31 Oct 22 18:30 UTC | 31 Oct 22 18:30 UTC |
	|            | --schedule 15s                                 |                           |          |         |                     |                     |
	| delete     | -p scheduled-stop-182920                       | scheduled-stop-182920     | jenkins  | v1.27.1 | 31 Oct 22 18:31 UTC | 31 Oct 22 18:31 UTC |
	| start      | -p skaffold-183126                             | skaffold-183126           | jenkins  | v1.27.1 | 31 Oct 22 18:31 UTC | 31 Oct 22 18:32 UTC |
	|            | --memory=2600 --driver=kvm2                    |                           |          |         |                     |                     |
	| docker-env | --shell none -p                                | skaffold-183126           | skaffold | v1.27.1 | 31 Oct 22 18:32 UTC | 31 Oct 22 18:32 UTC |
	|            | skaffold-183126                                |                           |          |         |                     |                     |
	|            | --user=skaffold                                |                           |          |         |                     |                     |
	| delete     | -p skaffold-183126                             | skaffold-183126           | jenkins  | v1.27.1 | 31 Oct 22 18:32 UTC | 31 Oct 22 18:32 UTC |
	| start      | -p kubernetes-upgrade-183258                   | kubernetes-upgrade-183258 | jenkins  | v1.27.1 | 31 Oct 22 18:32 UTC | 31 Oct 22 18:34 UTC |
	|            | --memory=2200                                  |                           |          |         |                     |                     |
	|            | --kubernetes-version=v1.16.0                   |                           |          |         |                     |                     |
	|            | --alsologtostderr -v=1                         |                           |          |         |                     |                     |
	|            | --driver=kvm2                                  |                           |          |         |                     |                     |
	| start      | -p gvisor-183258 --memory=2200                 | gvisor-183258             | jenkins  | v1.27.1 | 31 Oct 22 18:32 UTC | 31 Oct 22 18:35 UTC |
	|            | --container-runtime=containerd --docker-opt    |                           |          |         |                     |                     |
	|            | containerd=/var/run/containerd/containerd.sock |                           |          |         |                     |                     |
	|            | --driver=kvm2                                  |                           |          |         |                     |                     |
	| start      | -p force-systemd-flag-183258                   | force-systemd-flag-183258 | jenkins  | v1.27.1 | 31 Oct 22 18:32 UTC | 31 Oct 22 18:34 UTC |
	|            | --memory=2048 --force-systemd                  |                           |          |         |                     |                     |
	|            | --alsologtostderr -v=5                         |                           |          |         |                     |                     |
	|            | --driver=kvm2                                  |                           |          |         |                     |                     |
	| start      | -p offline-docker-183258                       | offline-docker-183258     | jenkins  | v1.27.1 | 31 Oct 22 18:32 UTC | 31 Oct 22 18:34 UTC |
	|            | --alsologtostderr -v=1                         |                           |          |         |                     |                     |
	|            | --memory=2048 --wait=true                      |                           |          |         |                     |                     |
	|            | --driver=kvm2                                  |                           |          |         |                     |                     |
	| stop       | -p kubernetes-upgrade-183258                   | kubernetes-upgrade-183258 | jenkins  | v1.27.1 | 31 Oct 22 18:34 UTC | 31 Oct 22 18:34 UTC |
	| start      | -p kubernetes-upgrade-183258                   | kubernetes-upgrade-183258 | jenkins  | v1.27.1 | 31 Oct 22 18:34 UTC | 31 Oct 22 18:35 UTC |
	|            | --memory=2200                                  |                           |          |         |                     |                     |
	|            | --kubernetes-version=v1.25.3                   |                           |          |         |                     |                     |
	|            | --alsologtostderr -v=1                         |                           |          |         |                     |                     |
	|            | --driver=kvm2                                  |                           |          |         |                     |                     |
	| delete     | -p offline-docker-183258                       | offline-docker-183258     | jenkins  | v1.27.1 | 31 Oct 22 18:34 UTC | 31 Oct 22 18:34 UTC |
	| ssh        | force-systemd-flag-183258                      | force-systemd-flag-183258 | jenkins  | v1.27.1 | 31 Oct 22 18:34 UTC | 31 Oct 22 18:34 UTC |
	|            | ssh docker info --format                       |                           |          |         |                     |                     |
	|            | {{.CgroupDriver}}                              |                           |          |         |                     |                     |
	| delete     | -p force-systemd-flag-183258                   | force-systemd-flag-183258 | jenkins  | v1.27.1 | 31 Oct 22 18:34 UTC | 31 Oct 22 18:34 UTC |
	| cache      | gvisor-183258 cache add                        | gvisor-183258             | jenkins  | v1.27.1 | 31 Oct 22 18:35 UTC | 31 Oct 22 18:35 UTC |
	|            | gcr.io/k8s-minikube/gvisor-addon:2             |                           |          |         |                     |                     |
	| start      | -p kubernetes-upgrade-183258                   | kubernetes-upgrade-183258 | jenkins  | v1.27.1 | 31 Oct 22 18:35 UTC |                     |
	|            | --memory=2200                                  |                           |          |         |                     |                     |
	|            | --kubernetes-version=v1.16.0                   |                           |          |         |                     |                     |
	|            | --driver=kvm2                                  |                           |          |         |                     |                     |
	| start      | -p kubernetes-upgrade-183258                   | kubernetes-upgrade-183258 | jenkins  | v1.27.1 | 31 Oct 22 18:35 UTC |                     |
	|            | --memory=2200                                  |                           |          |         |                     |                     |
	|            | --kubernetes-version=v1.25.3                   |                           |          |         |                     |                     |
	|            | --alsologtostderr -v=1                         |                           |          |         |                     |                     |
	|            | --driver=kvm2                                  |                           |          |         |                     |                     |
	| addons     | gvisor-183258 addons enable                    | gvisor-183258             | jenkins  | v1.27.1 | 31 Oct 22 18:35 UTC | 31 Oct 22 18:35 UTC |
	|            | gvisor                                         |                           |          |         |                     |                     |
	|------------|------------------------------------------------|---------------------------|----------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/10/31 18:35:31
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.19.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 18:35:31.911476   66096 out.go:296] Setting OutFile to fd 1 ...
	I1031 18:35:31.911621   66096 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:35:31.911640   66096 out.go:309] Setting ErrFile to fd 2...
	I1031 18:35:31.911647   66096 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:35:31.911854   66096 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15242-42743/.minikube/bin
	I1031 18:35:31.912610   66096 out.go:303] Setting JSON to false
	I1031 18:35:31.913777   66096 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":8284,"bootTime":1667233048,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 18:35:31.913915   66096 start.go:126] virtualization: kvm guest
	I1031 18:35:32.001900   66096 out.go:177] * [kubernetes-upgrade-183258] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1031 18:35:32.003561   66096 notify.go:220] Checking for updates...
	I1031 18:35:32.004947   66096 out.go:177]   - MINIKUBE_LOCATION=15242
	I1031 18:35:32.006845   66096 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 18:35:32.008281   66096 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15242-42743/kubeconfig
	I1031 18:35:32.009646   66096 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-42743/.minikube
	I1031 18:35:32.010900   66096 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 18:35:32.012759   66096 config.go:180] Loaded profile config "kubernetes-upgrade-183258": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1031 18:35:32.013322   66096 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:35:32.013375   66096 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:35:32.034270   66096 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:42039
	I1031 18:35:32.034740   66096 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:35:32.035339   66096 main.go:134] libmachine: Using API Version  1
	I1031 18:35:32.035366   66096 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:35:32.035783   66096 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:35:32.035982   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .DriverName
	I1031 18:35:32.036274   66096 driver.go:365] Setting default libvirt URI to qemu:///system
	I1031 18:35:32.036610   66096 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:35:32.036657   66096 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:35:32.060713   66096 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:46047
	I1031 18:35:32.061268   66096 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:35:32.061831   66096 main.go:134] libmachine: Using API Version  1
	I1031 18:35:32.061854   66096 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:35:32.062339   66096 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:35:32.062516   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .DriverName
	I1031 18:35:32.128422   66096 out.go:177] * Using the kvm2 driver based on existing profile
	I1031 18:35:32.130977   66096 start.go:282] selected driver: kvm2
	I1031 18:35:32.131008   66096 start.go:808] validating driver "kvm2" against &{Name:kubernetes-upgrade-183258 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15159/minikube-v1.27.0-1666206003-15159-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernet
esConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-183258 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 18:35:32.131192   66096 start.go:819] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 18:35:32.132368   66096 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 18:35:32.132562   66096 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15242-42743/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 18:35:32.160777   66096 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.27.1
	I1031 18:35:32.161265   66096 cni.go:95] Creating CNI manager for ""
	I1031 18:35:32.161293   66096 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1031 18:35:32.161308   66096 start_flags.go:317] config:
	{Name:kubernetes-upgrade-183258 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15159/minikube-v1.27.0-1666206003-15159-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-183258 Name
space:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.232 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-a
liases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 18:35:32.161893   66096 iso.go:124] acquiring lock: {Name:mk1b8df3d0e7e7151d07f634c55bc8cb360d70d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 18:35:32.164046   66096 out.go:177] * Starting control plane node kubernetes-upgrade-183258 in cluster kubernetes-upgrade-183258
	I1031 18:35:32.165465   66096 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1031 18:35:32.165518   66096 preload.go:148] Found local preload: /home/jenkins/minikube-integration/15242-42743/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1031 18:35:32.165545   66096 cache.go:57] Caching tarball of preloaded images
	I1031 18:35:32.165696   66096 preload.go:174] Found /home/jenkins/minikube-integration/15242-42743/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1031 18:35:32.165724   66096 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1031 18:35:32.165908   66096 profile.go:148] Saving config to /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kubernetes-upgrade-183258/config.json ...
	I1031 18:35:32.166145   66096 cache.go:208] Successfully downloaded all kic artifacts
	I1031 18:35:32.166173   66096 start.go:364] acquiring machines lock for kubernetes-upgrade-183258: {Name:mk15de2cb0eed92cba3648c402e45ec73a1cbfb5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1031 18:35:32.166253   66096 start.go:368] acquired machines lock for "kubernetes-upgrade-183258" in 57.15µs
	I1031 18:35:32.166273   66096 start.go:96] Skipping create...Using existing machine configuration
	I1031 18:35:32.166279   66096 fix.go:55] fixHost starting: 
	I1031 18:35:32.166721   66096 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:35:32.166768   66096 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:35:32.188392   66096 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:42979
	I1031 18:35:32.189140   66096 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:35:32.189715   66096 main.go:134] libmachine: Using API Version  1
	I1031 18:35:32.189744   66096 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:35:32.190169   66096 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:35:32.190381   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .DriverName
	I1031 18:35:32.190544   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetState
	I1031 18:35:32.192641   66096 fix.go:103] recreateIfNeeded on kubernetes-upgrade-183258: state=Running err=<nil>
	W1031 18:35:32.192665   66096 fix.go:129] unexpected machine state, will restart: <nil>
	I1031 18:35:32.194473   66096 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-183258" VM ...
	I1031 18:35:32.195575   66096 machine.go:88] provisioning docker machine ...
	I1031 18:35:32.195602   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .DriverName
	I1031 18:35:32.195864   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetMachineName
	I1031 18:35:32.196043   66096 buildroot.go:166] provisioning hostname "kubernetes-upgrade-183258"
	I1031 18:35:32.196071   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetMachineName
	I1031 18:35:32.196249   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHHostname
	I1031 18:35:32.199322   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:32.199847   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:32.199876   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:32.200039   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHPort
	I1031 18:35:32.200212   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:32.200353   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:32.200510   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHUsername
	I1031 18:35:32.200688   66096 main.go:134] libmachine: Using SSH client type: native
	I1031 18:35:32.200878   66096 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I1031 18:35:32.200894   66096 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-183258 && echo "kubernetes-upgrade-183258" | sudo tee /etc/hostname
	I1031 18:35:32.354527   66096 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-183258
	
	I1031 18:35:32.354553   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHHostname
	I1031 18:35:32.357394   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:32.357801   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:32.357850   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:32.358072   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHPort
	I1031 18:35:32.358277   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:32.358473   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:32.358697   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHUsername
	I1031 18:35:32.358904   66096 main.go:134] libmachine: Using SSH client type: native
	I1031 18:35:32.359053   66096 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I1031 18:35:32.359071   66096 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-183258' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-183258/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-183258' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1031 18:35:32.505476   66096 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1031 18:35:32.505510   66096 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15242-42743/.minikube CaCertPath:/home/jenkins/minikube-integration/15242-42743/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15242-42743/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15242-42743/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15242-42743/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15242-42743/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15242-42743/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15242-42743/.minikube}
	I1031 18:35:32.505554   66096 buildroot.go:174] setting up certificates
	I1031 18:35:32.505566   66096 provision.go:83] configureAuth start
	I1031 18:35:32.505579   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetMachineName
	I1031 18:35:32.505972   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetIP
	I1031 18:35:32.509384   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:32.509848   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:32.509886   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:32.510285   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHHostname
	I1031 18:35:32.512944   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:32.513351   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:32.513396   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:32.513626   66096 provision.go:138] copyHostCerts
	I1031 18:35:32.513692   66096 exec_runner.go:144] found /home/jenkins/minikube-integration/15242-42743/.minikube/ca.pem, removing ...
	I1031 18:35:32.513711   66096 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15242-42743/.minikube/ca.pem
	I1031 18:35:32.513771   66096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15242-42743/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15242-42743/.minikube/ca.pem (1078 bytes)
	I1031 18:35:32.513877   66096 exec_runner.go:144] found /home/jenkins/minikube-integration/15242-42743/.minikube/cert.pem, removing ...
	I1031 18:35:32.513889   66096 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15242-42743/.minikube/cert.pem
	I1031 18:35:32.513952   66096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15242-42743/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15242-42743/.minikube/cert.pem (1123 bytes)
	I1031 18:35:32.514020   66096 exec_runner.go:144] found /home/jenkins/minikube-integration/15242-42743/.minikube/key.pem, removing ...
	I1031 18:35:32.514033   66096 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15242-42743/.minikube/key.pem
	I1031 18:35:32.514061   66096 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15242-42743/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15242-42743/.minikube/key.pem (1675 bytes)
	I1031 18:35:32.514120   66096 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15242-42743/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15242-42743/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15242-42743/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-183258 san=[192.168.39.232 192.168.39.232 localhost 127.0.0.1 minikube kubernetes-upgrade-183258]
	I1031 18:35:32.774490   66096 provision.go:172] copyRemoteCerts
	I1031 18:35:32.774866   66096 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1031 18:35:32.774913   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHHostname
	I1031 18:35:32.779602   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:32.780375   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:32.780469   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:32.780746   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHPort
	I1031 18:35:32.781043   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:32.781207   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHUsername
	I1031 18:35:32.781358   66096 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15242-42743/.minikube/machines/kubernetes-upgrade-183258/id_rsa Username:docker}
	I1031 18:35:32.894096   66096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-42743/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1031 18:35:32.926015   66096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-42743/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1031 18:35:32.961546   66096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-42743/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1031 18:35:32.993422   66096 provision.go:86] duration metric: configureAuth took 487.840281ms
	I1031 18:35:32.993456   66096 buildroot.go:189] setting minikube options for container-runtime
	I1031 18:35:32.993718   66096 config.go:180] Loaded profile config "kubernetes-upgrade-183258": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1031 18:35:32.993759   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .DriverName
	I1031 18:35:32.994094   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHHostname
	I1031 18:35:32.997352   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:32.997893   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:32.997971   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:32.998216   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHPort
	I1031 18:35:32.998431   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:32.998639   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:32.998807   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHUsername
	I1031 18:35:32.999025   66096 main.go:134] libmachine: Using SSH client type: native
	I1031 18:35:32.999187   66096 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I1031 18:35:32.999208   66096 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1031 18:35:33.146628   66096 main.go:134] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1031 18:35:33.146657   66096 buildroot.go:70] root file system type: tmpfs
	I1031 18:35:33.146843   66096 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1031 18:35:33.146873   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHHostname
	I1031 18:35:33.150479   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.150913   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:33.150986   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.151295   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHPort
	I1031 18:35:33.151502   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:33.151724   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:33.151937   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHUsername
	I1031 18:35:33.152131   66096 main.go:134] libmachine: Using SSH client type: native
	I1031 18:35:33.152304   66096 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I1031 18:35:33.152407   66096 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1031 18:35:33.314469   66096 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1031 18:35:33.314511   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHHostname
	I1031 18:35:33.317981   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.318445   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:33.318488   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.318721   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHPort
	I1031 18:35:33.319008   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:33.319316   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:33.319518   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHUsername
	I1031 18:35:33.319700   66096 main.go:134] libmachine: Using SSH client type: native
	I1031 18:35:33.319850   66096 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I1031 18:35:33.319875   66096 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1031 18:35:33.466468   66096 main.go:134] libmachine: SSH cmd err, output: <nil>: 
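The SSH command above uses a write-then-swap idiom: the regenerated unit is written to `docker.service.new`, and only if it differs from the current unit is it moved into place, followed by `daemon-reload` and a restart. A minimal sketch of that idiom, using a temp directory instead of `/lib/systemd/system` and an `echo` in place of the systemctl steps so it runs anywhere (paths and flag names here are illustrative, not the ones minikube uses):

```shell
#!/bin/sh
# Sketch of the update-if-changed idiom from the log above.
set -eu
dir=$(mktemp -d)
printf 'ExecStart=/usr/bin/dockerd --old-flag\n' > "$dir/docker.service"
printf 'ExecStart=/usr/bin/dockerd --new-flag\n' > "$dir/docker.service.new"

# Replace the unit only when the candidate differs; if diff finds no
# difference it exits 0 and the block after || is skipped entirely.
diff -u "$dir/docker.service" "$dir/docker.service.new" >/dev/null 2>&1 || {
  mv "$dir/docker.service.new" "$dir/docker.service"
  echo "unit changed: would run daemon-reload && restart docker"
}
cat "$dir/docker.service"
```

The `|| { ... }` grouping is what makes an unchanged unit a no-op: an idempotent re-provision leaves the service untouched and avoids a needless restart.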
	I1031 18:35:33.466511   66096 machine.go:91] provisioned docker machine in 1.270917739s
	I1031 18:35:33.466524   66096 start.go:300] post-start starting for "kubernetes-upgrade-183258" (driver="kvm2")
	I1031 18:35:33.466533   66096 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1031 18:35:33.466559   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .DriverName
	I1031 18:35:33.466880   66096 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1031 18:35:33.466930   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHHostname
	I1031 18:35:33.470346   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.470769   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:33.470808   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.471024   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHPort
	I1031 18:35:33.471252   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:33.471444   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHUsername
	I1031 18:35:33.471635   66096 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15242-42743/.minikube/machines/kubernetes-upgrade-183258/id_rsa Username:docker}
	I1031 18:35:33.571386   66096 ssh_runner.go:195] Run: cat /etc/os-release
	I1031 18:35:33.575986   66096 info.go:137] Remote host: Buildroot 2021.02.12
	I1031 18:35:33.576021   66096 filesync.go:126] Scanning /home/jenkins/minikube-integration/15242-42743/.minikube/addons for local assets ...
	I1031 18:35:33.576097   66096 filesync.go:126] Scanning /home/jenkins/minikube-integration/15242-42743/.minikube/files for local assets ...
	I1031 18:35:33.576199   66096 filesync.go:149] local asset: /home/jenkins/minikube-integration/15242-42743/.minikube/files/etc/ssl/certs/495292.pem -> 495292.pem in /etc/ssl/certs
	I1031 18:35:33.576304   66096 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1031 18:35:33.588125   66096 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15242-42743/.minikube/files/etc/ssl/certs/495292.pem --> /etc/ssl/certs/495292.pem (1708 bytes)
	I1031 18:35:33.619971   66096 start.go:303] post-start completed in 153.431477ms
	I1031 18:35:33.620000   66096 fix.go:57] fixHost completed within 1.453720809s
	I1031 18:35:33.620029   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHHostname
	I1031 18:35:33.623346   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.623772   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:33.623815   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.624190   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHPort
	I1031 18:35:33.624407   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:33.624576   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:33.624788   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHUsername
	I1031 18:35:33.624999   66096 main.go:134] libmachine: Using SSH client type: native
	I1031 18:35:33.625211   66096 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ed4e0] 0x7f0660 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I1031 18:35:33.625235   66096 main.go:134] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1031 18:35:33.820237   66096 main.go:134] libmachine: SSH cmd err, output: <nil>: 1667241333.785620347
	
	I1031 18:35:33.820270   66096 fix.go:207] guest clock: 1667241333.785620347
	I1031 18:35:33.820282   66096 fix.go:220] Guest: 2022-10-31 18:35:33.785620347 +0000 UTC Remote: 2022-10-31 18:35:33.620005073 +0000 UTC m=+1.804122754 (delta=165.615274ms)
	I1031 18:35:33.820307   66096 fix.go:191] guest clock delta is within tolerance: 165.615274ms
	I1031 18:35:33.820315   66096 start.go:83] releasing machines lock for "kubernetes-upgrade-183258", held for 1.654048416s
	I1031 18:35:33.820361   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .DriverName
	I1031 18:35:33.820744   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetIP
	I1031 18:35:33.824345   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.824871   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:33.824909   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.825086   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .DriverName
	I1031 18:35:33.825550   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .DriverName
	I1031 18:35:33.825708   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .DriverName
	I1031 18:35:33.825783   66096 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1031 18:35:33.825833   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHHostname
	I1031 18:35:33.826190   66096 ssh_runner.go:195] Run: systemctl --version
	I1031 18:35:33.826221   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHHostname
	I1031 18:35:33.829455   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.829695   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.830111   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:33.830170   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:dd:ed", ip: ""} in network mk-kubernetes-upgrade-183258: {Iface:virbr1 ExpiryTime:2022-10-31 19:33:12 +0000 UTC Type:0 Mac:52:54:00:93:dd:ed Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:kubernetes-upgrade-183258 Clientid:01:52:54:00:93:dd:ed}
	I1031 18:35:33.830215   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.830243   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) DBG | domain kubernetes-upgrade-183258 has defined IP address 192.168.39.232 and MAC address 52:54:00:93:dd:ed in network mk-kubernetes-upgrade-183258
	I1031 18:35:33.830476   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHPort
	I1031 18:35:33.830685   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHPort
	I1031 18:35:33.830690   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:33.830902   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHKeyPath
	I1031 18:35:33.830989   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHUsername
	I1031 18:35:33.831095   66096 main.go:134] libmachine: (kubernetes-upgrade-183258) Calling .GetSSHUsername
	I1031 18:35:33.831202   66096 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15242-42743/.minikube/machines/kubernetes-upgrade-183258/id_rsa Username:docker}
	I1031 18:35:33.831310   66096 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15242-42743/.minikube/machines/kubernetes-upgrade-183258/id_rsa Username:docker}
	I1031 18:35:33.968321   66096 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1031 18:35:33.968511   66096 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1031 18:35:34.001728   66096 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1031 18:35:34.001749   66096 docker.go:543] Images already preloaded, skipping extraction
	I1031 18:35:34.001818   66096 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1031 18:35:34.015346   66096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1031 18:35:34.028968   66096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1031 18:35:34.041869   66096 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
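The `printf | sudo tee` above generates a two-line crictl configuration pointing both the runtime and image endpoints at the cri-dockerd socket. The same file can be produced without sudo by writing into a scratch directory instead of `/etc` (the temp-dir target here is for illustration; the real path is `/etc/crictl.yaml`):

```shell
#!/bin/sh
# Write the crictl config shown in the log into a scratch directory.
set -eu
dir=$(mktemp -d)
printf '%s\n' \
  'runtime-endpoint: unix:///var/run/cri-dockerd.sock' \
  'image-endpoint: unix:///var/run/cri-dockerd.sock' \
  > "$dir/crictl.yaml"
cat "$dir/crictl.yaml"
```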
	I1031 18:35:34.065099   66096 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1031 18:35:34.231538   66096 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1031 18:35:34.398118   66096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1031 18:35:34.613712   66096 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1031 18:35:46.650394   66096 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.036593604s)
	I1031 18:35:46.654224   66096 out.go:177] 
	W1031 18:35:46.656183   66096 out.go:239] X Exiting due to RUNTIME_ENABLE: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	W1031 18:35:46.656205   66096 out.go:239] * 
	W1031 18:35:46.657049   66096 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1031 18:35:46.674844   66096 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2022-10-31 18:34:49 UTC, ends at Mon 2022-10-31 18:35:47 UTC. --
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: docker.service: Unit process 3004 (containerd-shim) remains running after unit stopped.
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: docker.service: Unit process 3061 (containerd-shim) remains running after unit stopped.
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: docker.service: Unit process 3183 (containerd-shim) remains running after unit stopped.
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: docker.service: Unit process 3202 (containerd-shim) remains running after unit stopped.
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: docker.service: Unit process 3250 (containerd-shim) remains running after unit stopped.
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: docker.service: Unit process 3306 (containerd-shim) remains running after unit stopped.
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: docker.service: Unit process 3362 (containerd-shim) remains running after unit stopped.
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: docker.service: Unit process 3479 (containerd-shim) remains running after unit stopped.
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: docker.service: Unit process 3558 (containerd-shim) remains running after unit stopped.
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: Failed to start Docker Application Container Engine.
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: Stopped Docker Application Container Engine.
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: docker.service: Start request repeated too quickly.
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: docker.service: Failed with result 'exit-code'.
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: docker.service: Unit process 2986 (containerd-shim) remains running after unit stopped.
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: docker.service: Unit process 3004 (containerd-shim) remains running after unit stopped.
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: docker.service: Unit process 3061 (containerd-shim) remains running after unit stopped.
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: docker.service: Unit process 3183 (containerd-shim) remains running after unit stopped.
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: docker.service: Unit process 3202 (containerd-shim) remains running after unit stopped.
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: docker.service: Unit process 3250 (containerd-shim) remains running after unit stopped.
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: docker.service: Unit process 3306 (containerd-shim) remains running after unit stopped.
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: docker.service: Unit process 3362 (containerd-shim) remains running after unit stopped.
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: docker.service: Unit process 3479 (containerd-shim) remains running after unit stopped.
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: docker.service: Unit process 3558 (containerd-shim) remains running after unit stopped.
	Oct 31 18:35:47 kubernetes-upgrade-183258 systemd[1]: Failed to start Docker Application Container Engine.
	
	* 
	* ==> container status <==
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.063696] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.539532] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.908976] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.130919] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.403167] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.271548] systemd-fstab-generator[511]: Ignoring "noauto" for root device
	[  +0.123033] systemd-fstab-generator[522]: Ignoring "noauto" for root device
	[Oct31 18:35] systemd-fstab-generator[732]: Ignoring "noauto" for root device
	[  +4.686377] kauditd_printk_skb: 28 callbacks suppressed
	[  +0.376508] systemd-fstab-generator[901]: Ignoring "noauto" for root device
	[  +0.150999] systemd-fstab-generator[912]: Ignoring "noauto" for root device
	[  +0.111383] systemd-fstab-generator[923]: Ignoring "noauto" for root device
	[  +1.570891] systemd-fstab-generator[1080]: Ignoring "noauto" for root device
	[  +0.121325] systemd-fstab-generator[1091]: Ignoring "noauto" for root device
	[  +2.227794] systemd-fstab-generator[1324]: Ignoring "noauto" for root device
	[  +0.443486] kauditd_printk_skb: 72 callbacks suppressed
	[ +14.100135] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.481105] systemd-fstab-generator[2590]: Ignoring "noauto" for root device
	[  +0.182541] systemd-fstab-generator[2601]: Ignoring "noauto" for root device
	[  +0.153579] systemd-fstab-generator[2612]: Ignoring "noauto" for root device
	[  +1.135537] kauditd_printk_skb: 6 callbacks suppressed
	
	* 
	* ==> kernel <==
	*  18:35:47 up 1 min,  0 users,  load average: 0.84, 0.26, 0.09
	Linux kubernetes-upgrade-183258 5.10.57 #1 SMP Wed Oct 19 23:03:20 UTC 2022 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2022-10-31 18:34:49 UTC, ends at Mon 2022-10-31 18:35:47 UTC. --
	Oct 31 18:35:45 kubernetes-upgrade-183258 kubelet[1330]: I1031 18:35:45.457920    1330 scope.go:115] "RemoveContainer" containerID="6963d37bd9be9d717f6145796f1f477bc4cb4149dfc8196dd7b31e2780d79f27"
	Oct 31 18:35:45 kubernetes-upgrade-183258 kubelet[1330]: I1031 18:35:45.458315    1330 status_manager.go:667] "Failed to get status for pod" podUID=2b364e6f-4e7f-4aa4-8354-c39032133228 pod="kube-system/coredns-5644d7b6d9-vgbhp" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-5644d7b6d9-vgbhp\": dial tcp 192.168.39.232:8443: connect: connection refused"
	Oct 31 18:35:45 kubernetes-upgrade-183258 kubelet[1330]: I1031 18:35:45.568212    1330 scope.go:115] "RemoveContainer" containerID="7867fc99f71d517caa483a9b3269954e360270c55d6ecc8b3dadc9fac243d202"
	Oct 31 18:35:45 kubernetes-upgrade-183258 kubelet[1330]: E1031 18:35:45.629947    1330 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-183258\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-183258?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.232:8443: connect: connection refused"
	Oct 31 18:35:45 kubernetes-upgrade-183258 kubelet[1330]: E1031 18:35:45.630282    1330 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-183258\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-183258?timeout=10s\": dial tcp 192.168.39.232:8443: connect: connection refused"
	Oct 31 18:35:45 kubernetes-upgrade-183258 kubelet[1330]: E1031 18:35:45.630800    1330 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-183258\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-183258?timeout=10s\": dial tcp 192.168.39.232:8443: connect: connection refused"
	Oct 31 18:35:45 kubernetes-upgrade-183258 kubelet[1330]: E1031 18:35:45.631183    1330 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-183258\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-183258?timeout=10s\": dial tcp 192.168.39.232:8443: connect: connection refused"
	Oct 31 18:35:45 kubernetes-upgrade-183258 kubelet[1330]: E1031 18:35:45.631580    1330 kubelet_node_status.go:460] "Error updating node status, will retry" err="error getting node \"kubernetes-upgrade-183258\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-183258?timeout=10s\": dial tcp 192.168.39.232:8443: connect: connection refused"
	Oct 31 18:35:45 kubernetes-upgrade-183258 kubelet[1330]: E1031 18:35:45.631600    1330 kubelet_node_status.go:447] "Unable to update node status" err="update node status exceeds retry count"
	Oct 31 18:35:45 kubernetes-upgrade-183258 kubelet[1330]: I1031 18:35:45.791866    1330 scope.go:115] "RemoveContainer" containerID="9a931c3b3f0f69afc7f3e111205103a86a2b16576ee47abdb2bf40b51e444176"
	Oct 31 18:35:45 kubernetes-upgrade-183258 kubelet[1330]: E1031 18:35:45.792443    1330 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-5644d7b6d9-vgbhp_kube-system(2b364e6f-4e7f-4aa4-8354-c39032133228)\"" pod="kube-system/coredns-5644d7b6d9-vgbhp" podUID=2b364e6f-4e7f-4aa4-8354-c39032133228
	Oct 31 18:35:45 kubernetes-upgrade-183258 kubelet[1330]: I1031 18:35:45.798016    1330 status_manager.go:667] "Failed to get status for pod" podUID=0d2e6a231072c6421a798f50631be0e3 pod="kube-system/kube-apiserver-kubernetes-upgrade-183258" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-kubernetes-upgrade-183258\": dial tcp 192.168.39.232:8443: connect: connection refused"
	Oct 31 18:35:45 kubernetes-upgrade-183258 kubelet[1330]: E1031 18:35:45.895045    1330 remote_runtime.go:233] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to start sandbox container for pod \"kube-apiserver-kubernetes-upgrade-183258\": Error response from daemon: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
	Oct 31 18:35:45 kubernetes-upgrade-183258 kubelet[1330]: E1031 18:35:45.895101    1330 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to start sandbox container for pod \"kube-apiserver-kubernetes-upgrade-183258\": Error response from daemon: failed to update store for object type *libnetwork.endpoint: open : no such file or directory" pod="kube-system/kube-apiserver-kubernetes-upgrade-183258"
	Oct 31 18:35:45 kubernetes-upgrade-183258 kubelet[1330]: E1031 18:35:45.895226    1330 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to start sandbox container for pod \"kube-apiserver-kubernetes-upgrade-183258\": Error response from daemon: failed to update store for object type *libnetwork.endpoint: open : no such file or directory" pod="kube-system/kube-apiserver-kubernetes-upgrade-183258"
	Oct 31 18:35:45 kubernetes-upgrade-183258 kubelet[1330]: E1031 18:35:45.895325    1330 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-kubernetes-upgrade-183258_kube-system(0d2e6a231072c6421a798f50631be0e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-kubernetes-upgrade-183258_kube-system(0d2e6a231072c6421a798f50631be0e3)\\\": rpc error: code = Unknown desc = failed to start sandbox container for pod \\\"kube-apiserver-kubernetes-upgrade-183258\\\": Error response from daemon: failed to update store for object type *libnetwork.endpoint: open : no such file or directory\"" pod="kube-system/kube-apiserver-kubernetes-upgrade-183258" podUID=0d2e6a231072c6421a798f50631be0e3
	Oct 31 18:35:46 kubernetes-upgrade-183258 kubelet[1330]: E1031 18:35:46.857454    1330 remote_image.go:294] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.40/info\": read unix @->/run/docker.sock: read: connection reset by peer"
	Oct 31 18:35:46 kubernetes-upgrade-183258 kubelet[1330]: E1031 18:35:46.857483    1330 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.40/info\": read unix @->/run/docker.sock: read: connection reset by peer"
	Oct 31 18:35:46 kubernetes-upgrade-183258 kubelet[1330]: E1031 18:35:46.857578    1330 remote_runtime.go:377] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.40/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)&limit=0\": read unix @->/run/docker.sock: read: connection reset by peer" filter="nil"
	Oct 31 18:35:46 kubernetes-upgrade-183258 kubelet[1330]: E1031 18:35:46.858587    1330 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.40/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)&limit=0\": read unix @->/run/docker.sock: read: connection reset by peer"
	Oct 31 18:35:46 kubernetes-upgrade-183258 kubelet[1330]: E1031 18:35:46.858622    1330 generic.go:205] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.40/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)&limit=0\": read unix @->/run/docker.sock: read: connection reset by peer"
	Oct 31 18:35:47 kubernetes-upgrade-183258 kubelet[1330]: E1031 18:35:47.597891    1330 remote_runtime.go:377] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},}"
	Oct 31 18:35:47 kubernetes-upgrade-183258 kubelet[1330]: E1031 18:35:47.597987    1330 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Oct 31 18:35:47 kubernetes-upgrade-183258 kubelet[1330]: E1031 18:35:47.598009    1330 kubelet_pods.go:1124] "Error listing containers" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Oct 31 18:35:47 kubernetes-upgrade-183258 kubelet[1330]: E1031 18:35:47.598027    1330 kubelet.go:2186] "Failed cleaning pods" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	
	

-- /stdout --
** stderr ** 
	E1031 18:35:47.421163   66301 logs.go:271] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json?all=1&filters=%7B%22name%22%3A%7B%22k8s_kube-apiserver%22%3Atrue%7D%7D": dial unix /var/run/docker.sock: connect: permission denied
	E1031 18:35:47.447152   66301 logs.go:271] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E1031 18:35:47.475290   66301 logs.go:271] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E1031 18:35:47.501365   66301 logs.go:271] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E1031 18:35:47.527678   66301 logs.go:271] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E1031 18:35:47.557049   66301 logs.go:271] Failed to list containers for "kubernetes-dashboard": docker: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E1031 18:35:47.583753   66301 logs.go:271] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E1031 18:35:47.610147   66301 logs.go:271] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E1031 18:35:47.717257   66301 logs.go:192] command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2022-10-31T18:35:47Z" level=fatal msg="listing containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	 output: "\n** stderr ** \ntime=\"2022-10-31T18:35:47Z\" level=fatal msg=\"listing containers: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\"\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\n\n** /stderr **"
	E1031 18:35:47.806077   66301 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: container status, describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-183258 -n kubernetes-upgrade-183258
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-183258 -n kubernetes-upgrade-183258: exit status 2 (312.281806ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "kubernetes-upgrade-183258" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-183258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-183258

=== CONT  TestKubernetesUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-183258: (2.242664253s)
--- FAIL: TestKubernetesUpgrade (172.14s)

TestNetworkPlugins/group/kubenet/HairPin (60.15s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.202835s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
E1031 18:48:18.633772   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kindnet-183258/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1031 18:48:21.940186   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.198728027s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1031 18:48:28.874917   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kindnet-183258/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.191666843s)

** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.20745795s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.152984198s)

** stderr ** 
	command terminated with exit code 1

                                                
E1031 18:48:49.355319   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kindnet-183258/client.crt: no such file or directory
net_test.go:238: (dbg) Run:  kubectl --context kubenet-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.163631298s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1031 18:49:08.904300   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/cilium-183258/client.crt: no such file or directory
E1031 18:49:08.909584   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/cilium-183258/client.crt: no such file or directory
E1031 18:49:08.919875   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/cilium-183258/client.crt: no such file or directory
E1031 18:49:08.940227   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/cilium-183258/client.crt: no such file or directory
E1031 18:49:08.981404   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/cilium-183258/client.crt: no such file or directory
E1031 18:49:09.061562   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/cilium-183258/client.crt: no such file or directory
E1031 18:49:09.222260   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/cilium-183258/client.crt: no such file or directory
E1031 18:49:09.542869   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/cilium-183258/client.crt: no such file or directory
E1031 18:49:10.183373   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/cilium-183258/client.crt: no such file or directory
E1031 18:49:11.463533   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/cilium-183258/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.154305954s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:243: failed to connect via pod host: exit status 1
--- FAIL: TestNetworkPlugins/group/kubenet/HairPin (60.15s)


Test pass (277/306)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 9.74
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.25.3/json-events 5.1
11 TestDownloadOnly/v1.25.3/preload-exists 0
15 TestDownloadOnly/v1.25.3/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.18
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.17
19 TestBinaryMirror 0.56
20 TestOffline 95.77
22 TestAddons/Setup 146.17
24 TestAddons/parallel/Registry 22.65
25 TestAddons/parallel/Ingress 28.04
26 TestAddons/parallel/MetricsServer 5.68
27 TestAddons/parallel/HelmTiller 18.7
29 TestAddons/parallel/CSI 50.91
30 TestAddons/parallel/Headlamp 11.21
31 TestAddons/parallel/CloudSpanner 5.49
33 TestAddons/serial/GCPAuth 44.62
34 TestAddons/StoppedEnableDisable 4.4
35 TestCertOptions 75
36 TestCertExpiration 311.86
37 TestDockerFlags 115.42
38 TestForceSystemdFlag 110.09
39 TestForceSystemdEnv 75.96
40 TestKVMDriverInstallOrUpdate 15.18
44 TestErrorSpam/setup 53.08
45 TestErrorSpam/start 0.4
46 TestErrorSpam/status 0.81
47 TestErrorSpam/pause 1.34
48 TestErrorSpam/unpause 1.42
49 TestErrorSpam/stop 12.53
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 69.93
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 37.52
56 TestFunctional/serial/KubeContext 0.04
57 TestFunctional/serial/KubectlGetPods 0.08
60 TestFunctional/serial/CacheCmd/cache/add_remote 4.27
61 TestFunctional/serial/CacheCmd/cache/add_local 1.57
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
63 TestFunctional/serial/CacheCmd/cache/list 0.07
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
65 TestFunctional/serial/CacheCmd/cache/cache_reload 1.62
66 TestFunctional/serial/CacheCmd/cache/delete 0.14
67 TestFunctional/serial/MinikubeKubectlCmd 0.13
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
69 TestFunctional/serial/ExtraConfig 39.03
70 TestFunctional/serial/ComponentHealth 0.07
71 TestFunctional/serial/LogsCmd 1.13
72 TestFunctional/serial/LogsFileCmd 1.15
74 TestFunctional/parallel/ConfigCmd 0.58
75 TestFunctional/parallel/DashboardCmd 17.86
76 TestFunctional/parallel/DryRun 0.4
77 TestFunctional/parallel/InternationalLanguage 0.19
78 TestFunctional/parallel/StatusCmd 0.97
81 TestFunctional/parallel/ServiceCmd 10.35
82 TestFunctional/parallel/ServiceCmdConnect 25.59
83 TestFunctional/parallel/AddonsCmd 0.19
84 TestFunctional/parallel/PersistentVolumeClaim 53
86 TestFunctional/parallel/SSHCmd 0.54
87 TestFunctional/parallel/CpCmd 1.11
88 TestFunctional/parallel/MySQL 30.38
89 TestFunctional/parallel/FileSync 0.31
90 TestFunctional/parallel/CertSync 1.56
94 TestFunctional/parallel/NodeLabels 0.07
96 TestFunctional/parallel/NonActiveRuntimeDisabled 0.25
98 TestFunctional/parallel/License 0.28
99 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
100 TestFunctional/parallel/MountCmd/any-port 9.6
101 TestFunctional/parallel/ProfileCmd/profile_list 0.37
102 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
111 TestFunctional/parallel/DockerEnv/bash 1.18
112 TestFunctional/parallel/Version/short 0.08
113 TestFunctional/parallel/Version/components 0.57
114 TestFunctional/parallel/MountCmd/specific-port 2.04
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.36
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
119 TestFunctional/parallel/ImageCommands/ImageBuild 5.28
120 TestFunctional/parallel/ImageCommands/Setup 1.44
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.86
122 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.66
123 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.18
124 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.87
125 TestFunctional/parallel/ImageCommands/ImageRemove 0.7
126 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.11
127 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
128 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
129 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.9
131 TestFunctional/delete_addon-resizer_images 0.08
132 TestFunctional/delete_my-image_image 0.02
133 TestFunctional/delete_minikube_cached_images 0.02
134 TestGvisorAddon 337.33
136 TestIngressAddonLegacy/StartLegacyK8sCluster 74.05
138 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 18.28
139 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.48
140 TestIngressAddonLegacy/serial/ValidateIngressAddons 38.76
143 TestJSONOutput/start/Command 71.33
144 TestJSONOutput/start/Audit 0
146 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
147 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
149 TestJSONOutput/pause/Command 0.62
150 TestJSONOutput/pause/Audit 0
152 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/unpause/Command 0.6
156 TestJSONOutput/unpause/Audit 0
158 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/stop/Command 8.12
162 TestJSONOutput/stop/Audit 0
164 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
166 TestErrorJSONOutput 0.26
170 TestMainNoArgs 0.07
171 TestMinikubeProfile 111.96
174 TestMountStart/serial/StartWithMountFirst 27.77
175 TestMountStart/serial/VerifyMountFirst 0.43
176 TestMountStart/serial/StartWithMountSecond 27.82
177 TestMountStart/serial/VerifyMountSecond 0.43
178 TestMountStart/serial/DeleteFirst 0.89
179 TestMountStart/serial/VerifyMountPostDelete 0.44
180 TestMountStart/serial/Stop 2.38
181 TestMountStart/serial/RestartStopped 22.84
182 TestMountStart/serial/VerifyMountPostStop 0.42
185 TestMultiNode/serial/FreshStart2Nodes 159.86
186 TestMultiNode/serial/DeployApp2Nodes 5.12
187 TestMultiNode/serial/PingHostFrom2Pods 0.99
188 TestMultiNode/serial/AddNode 63.52
189 TestMultiNode/serial/ProfileList 0.24
190 TestMultiNode/serial/CopyFile 8.24
191 TestMultiNode/serial/StopNode 4.06
192 TestMultiNode/serial/StartAfterStop 31.28
193 TestMultiNode/serial/RestartKeepsNodes 901.96
194 TestMultiNode/serial/DeleteNode 3.83
195 TestMultiNode/serial/StopMultiNode 5.68
196 TestMultiNode/serial/RestartMultiNode 614.17
202 TestPreload 182.39
204 TestScheduledStopUnix 126.08
205 TestSkaffold 92.06
208 TestRunningBinaryUpgrade 202.26
223 TestStoppedBinaryUpgrade/Setup 0.37
224 TestStoppedBinaryUpgrade/Upgrade 230.38
233 TestPause/serial/Start 83.57
234 TestPause/serial/SecondStartNoReconfiguration 74.18
236 TestNoKubernetes/serial/StartNoK8sWithVersion 0.14
237 TestNoKubernetes/serial/StartWithK8s 63.55
238 TestStoppedBinaryUpgrade/MinikubeLogs 1.46
239 TestPause/serial/Pause 0.7
240 TestPause/serial/VerifyStatus 0.3
241 TestPause/serial/Unpause 0.69
242 TestPause/serial/PauseAgain 0.81
243 TestPause/serial/DeletePaused 1.07
244 TestPause/serial/VerifyDeletedResources 0.31
245 TestNoKubernetes/serial/StartWithStopK8s 54.27
246 TestNoKubernetes/serial/Start 42.53
247 TestNetworkPlugins/group/auto/Start 89.4
248 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
249 TestNoKubernetes/serial/ProfileList 1.31
250 TestNoKubernetes/serial/Stop 2.22
251 TestNoKubernetes/serial/StartNoArgs 40.73
252 TestNetworkPlugins/group/kindnet/Start 125.16
253 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
254 TestNetworkPlugins/group/cilium/Start 152
255 TestNetworkPlugins/group/auto/KubeletFlags 0.23
256 TestNetworkPlugins/group/auto/NetCatPod 16.24
257 TestNetworkPlugins/group/auto/DNS 0.23
258 TestNetworkPlugins/group/auto/Localhost 0.18
259 TestNetworkPlugins/group/auto/HairPin 5.18
260 TestNetworkPlugins/group/calico/Start 337.3
261 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
262 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
263 TestNetworkPlugins/group/kindnet/NetCatPod 13.57
264 TestNetworkPlugins/group/kindnet/DNS 0.24
265 TestNetworkPlugins/group/kindnet/Localhost 0.16
266 TestNetworkPlugins/group/kindnet/HairPin 0.17
267 TestNetworkPlugins/group/custom-flannel/Start 81.64
268 TestNetworkPlugins/group/false/Start 87.19
269 TestNetworkPlugins/group/cilium/ControllerPod 5.02
270 TestNetworkPlugins/group/cilium/KubeletFlags 0.26
271 TestNetworkPlugins/group/cilium/NetCatPod 14.24
272 TestNetworkPlugins/group/cilium/DNS 0.27
273 TestNetworkPlugins/group/cilium/Localhost 0.17
274 TestNetworkPlugins/group/cilium/HairPin 0.26
275 TestNetworkPlugins/group/enable-default-cni/Start 111.78
276 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
277 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.37
278 TestNetworkPlugins/group/custom-flannel/DNS 0.24
279 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
280 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
281 TestNetworkPlugins/group/flannel/Start 91.09
282 TestNetworkPlugins/group/false/KubeletFlags 0.26
283 TestNetworkPlugins/group/false/NetCatPod 13.39
284 TestNetworkPlugins/group/false/DNS 0.21
285 TestNetworkPlugins/group/false/Localhost 0.16
286 TestNetworkPlugins/group/false/HairPin 5.16
287 TestNetworkPlugins/group/bridge/Start 78.05
288 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
289 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.39
290 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
291 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
292 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
293 TestNetworkPlugins/group/flannel/ControllerPod 5.02
294 TestNetworkPlugins/group/kubenet/Start 79.91
295 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
296 TestNetworkPlugins/group/flannel/NetCatPod 14.52
297 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
298 TestNetworkPlugins/group/bridge/NetCatPod 12.36
299 TestNetworkPlugins/group/flannel/DNS 0.2
300 TestNetworkPlugins/group/flannel/Localhost 0.17
301 TestNetworkPlugins/group/flannel/HairPin 0.16
303 TestStartStop/group/old-k8s-version/serial/FirstStart 151.69
304 TestNetworkPlugins/group/bridge/DNS 0.24
305 TestNetworkPlugins/group/bridge/Localhost 0.22
306 TestNetworkPlugins/group/bridge/HairPin 0.17
308 TestStartStop/group/no-preload/serial/FirstStart 155.91
309 TestNetworkPlugins/group/kubenet/KubeletFlags 0.25
310 TestNetworkPlugins/group/kubenet/NetCatPod 14.41
311 TestNetworkPlugins/group/calico/ControllerPod 5.02
312 TestNetworkPlugins/group/kubenet/DNS 0.21
313 TestNetworkPlugins/group/kubenet/Localhost 0.18
315 TestNetworkPlugins/group/calico/KubeletFlags 0.28
316 TestNetworkPlugins/group/calico/NetCatPod 16.44
317 TestNetworkPlugins/group/calico/DNS 0.23
318 TestNetworkPlugins/group/calico/Localhost 0.19
319 TestNetworkPlugins/group/calico/HairPin 0.21
321 TestStartStop/group/embed-certs/serial/FirstStart 81.75
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 79.63
324 TestStartStop/group/old-k8s-version/serial/DeployApp 8.42
325 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.76
326 TestStartStop/group/old-k8s-version/serial/Stop 4.15
327 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.27
328 TestStartStop/group/old-k8s-version/serial/SecondStart 416.32
329 TestStartStop/group/no-preload/serial/DeployApp 13.47
330 TestStartStop/group/embed-certs/serial/DeployApp 9.47
331 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.89
332 TestStartStop/group/no-preload/serial/Stop 4.13
333 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.31
334 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.16
335 TestStartStop/group/no-preload/serial/SecondStart 329.32
336 TestStartStop/group/embed-certs/serial/Stop 13.16
337 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
338 TestStartStop/group/embed-certs/serial/SecondStart 338.26
339 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.49
340 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.74
341 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.15
342 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
343 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 330.1
344 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 20.02
345 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
346 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 18.02
347 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
348 TestStartStop/group/no-preload/serial/Pause 3.1
350 TestStartStop/group/newest-cni/serial/FirstStart 75.26
351 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
352 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
353 TestStartStop/group/embed-certs/serial/Pause 2.79
354 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 13.02
355 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
356 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
357 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
358 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
359 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.86
360 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
361 TestStartStop/group/old-k8s-version/serial/Pause 3.37
362 TestStartStop/group/newest-cni/serial/DeployApp 0
363 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.83
364 TestStartStop/group/newest-cni/serial/Stop 13.14
365 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
366 TestStartStop/group/newest-cni/serial/SecondStart 38.72
367 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
368 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
369 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
370 TestStartStop/group/newest-cni/serial/Pause 2.34
TestDownloadOnly/v1.16.0/json-events (9.74s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-174010 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-174010 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (9.737583616s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (9.74s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-174010
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-174010: exit status 85 (86.473535ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-174010 | jenkins | v1.27.1 | 31 Oct 22 17:40 UTC |          |
	|         | -p download-only-174010        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/10/31 17:40:10
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.19.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 17:40:10.660756   49541 out.go:296] Setting OutFile to fd 1 ...
	I1031 17:40:10.660908   49541 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:40:10.660919   49541 out.go:309] Setting ErrFile to fd 2...
	I1031 17:40:10.660925   49541 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:40:10.661018   49541 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15242-42743/.minikube/bin
	W1031 17:40:10.661141   49541 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15242-42743/.minikube/config/config.json: open /home/jenkins/minikube-integration/15242-42743/.minikube/config/config.json: no such file or directory
	I1031 17:40:10.661753   49541 out.go:303] Setting JSON to true
	I1031 17:40:10.662571   49541 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4963,"bootTime":1667233048,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 17:40:10.662666   49541 start.go:126] virtualization: kvm guest
	I1031 17:40:10.665547   49541 out.go:97] [download-only-174010] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1031 17:40:10.665649   49541 notify.go:220] Checking for updates...
	I1031 17:40:10.667066   49541 out.go:169] MINIKUBE_LOCATION=15242
	W1031 17:40:10.665692   49541 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15242-42743/.minikube/cache/preloaded-tarball: no such file or directory
	I1031 17:40:10.669761   49541 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 17:40:10.671152   49541 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15242-42743/kubeconfig
	I1031 17:40:10.672484   49541 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-42743/.minikube
	I1031 17:40:10.673835   49541 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1031 17:40:10.676255   49541 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1031 17:40:10.676421   49541 driver.go:365] Setting default libvirt URI to qemu:///system
	I1031 17:40:10.710253   49541 out.go:97] Using the kvm2 driver based on user configuration
	I1031 17:40:10.710275   49541 start.go:282] selected driver: kvm2
	I1031 17:40:10.710287   49541 start.go:808] validating driver "kvm2" against <nil>
	I1031 17:40:10.710595   49541 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:40:10.710802   49541 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15242-42743/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 17:40:10.725173   49541 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.27.1
	I1031 17:40:10.725238   49541 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1031 17:40:10.725738   49541 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32101MB, container=0MB
	I1031 17:40:10.725854   49541 start_flags.go:870] Wait components to verify : map[apiserver:true system_pods:true]
	I1031 17:40:10.725896   49541 cni.go:95] Creating CNI manager for ""
	I1031 17:40:10.725916   49541 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1031 17:40:10.725933   49541 start_flags.go:317] config:
	{Name:download-only-174010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-174010 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 17:40:10.726142   49541 iso.go:124] acquiring lock: {Name:mk1b8df3d0e7e7151d07f634c55bc8cb360d70d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:40:10.728375   49541 out.go:97] Downloading VM boot image ...
	I1031 17:40:10.728426   49541 download.go:101] Downloading: https://storage.googleapis.com/minikube-builds/iso/15159/minikube-v1.27.0-1666206003-15159-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/15159/minikube-v1.27.0-1666206003-15159-amd64.iso.sha256 -> /home/jenkins/minikube-integration/15242-42743/.minikube/cache/iso/amd64/minikube-v1.27.0-1666206003-15159-amd64.iso
	I1031 17:40:14.879529   49541 out.go:97] Starting control plane node download-only-174010 in cluster download-only-174010
	I1031 17:40:14.879558   49541 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1031 17:40:14.976907   49541 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1031 17:40:14.976969   49541 cache.go:57] Caching tarball of preloaded images
	I1031 17:40:14.977143   49541 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1031 17:40:14.979275   49541 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1031 17:40:14.979296   49541 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1031 17:40:15.078868   49541 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/15242-42743/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-174010"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

TestDownloadOnly/v1.25.3/json-events (5.1s)

=== RUN   TestDownloadOnly/v1.25.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-174010 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-174010 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=kvm2 : (5.098386283s)
--- PASS: TestDownloadOnly/v1.25.3/json-events (5.10s)

TestDownloadOnly/v1.25.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.25.3/preload-exists
--- PASS: TestDownloadOnly/v1.25.3/preload-exists (0.00s)

TestDownloadOnly/v1.25.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.25.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-174010
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-174010: exit status 85 (83.288441ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-174010 | jenkins | v1.27.1 | 31 Oct 22 17:40 UTC |          |
	|         | -p download-only-174010        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-174010 | jenkins | v1.27.1 | 31 Oct 22 17:40 UTC |          |
	|         | -p download-only-174010        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.25.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/10/31 17:40:20
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.19.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1031 17:40:20.491453   49576 out.go:296] Setting OutFile to fd 1 ...
	I1031 17:40:20.491588   49576 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:40:20.491600   49576 out.go:309] Setting ErrFile to fd 2...
	I1031 17:40:20.491606   49576 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:40:20.491733   49576 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15242-42743/.minikube/bin
	W1031 17:40:20.491864   49576 root.go:311] Error reading config file at /home/jenkins/minikube-integration/15242-42743/.minikube/config/config.json: open /home/jenkins/minikube-integration/15242-42743/.minikube/config/config.json: no such file or directory
	I1031 17:40:20.492341   49576 out.go:303] Setting JSON to true
	I1031 17:40:20.493226   49576 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4972,"bootTime":1667233048,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 17:40:20.493291   49576 start.go:126] virtualization: kvm guest
	I1031 17:40:20.495781   49576 out.go:97] [download-only-174010] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1031 17:40:20.495881   49576 notify.go:220] Checking for updates...
	I1031 17:40:20.497569   49576 out.go:169] MINIKUBE_LOCATION=15242
	I1031 17:40:20.499304   49576 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 17:40:20.500814   49576 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15242-42743/kubeconfig
	I1031 17:40:20.502392   49576 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-42743/.minikube
	I1031 17:40:20.504022   49576 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1031 17:40:20.506954   49576 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1031 17:40:20.507395   49576 config.go:180] Loaded profile config "download-only-174010": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1031 17:40:20.507453   49576 start.go:716] api.Load failed for download-only-174010: filestore "download-only-174010": Docker machine "download-only-174010" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1031 17:40:20.507507   49576 driver.go:365] Setting default libvirt URI to qemu:///system
	W1031 17:40:20.507533   49576 start.go:716] api.Load failed for download-only-174010: filestore "download-only-174010": Docker machine "download-only-174010" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1031 17:40:20.540085   49576 out.go:97] Using the kvm2 driver based on existing profile
	I1031 17:40:20.540107   49576 start.go:282] selected driver: kvm2
	I1031 17:40:20.540118   49576 start.go:808] validating driver "kvm2" against &{Name:download-only-174010 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15159/minikube-v1.27.0-1666206003-15159-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.16.0 ClusterName:download-only-174010 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 17:40:20.540472   49576 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:40:20.540677   49576 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15242-42743/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1031 17:40:20.555755   49576 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.27.1
	I1031 17:40:20.556419   49576 cni.go:95] Creating CNI manager for ""
	I1031 17:40:20.556439   49576 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1031 17:40:20.556457   49576 start_flags.go:317] config:
	{Name:download-only-174010 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15159/minikube-v1.27.0-1666206003-15159-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:download-only-174010 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Sock
etVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 17:40:20.556610   49576 iso.go:124] acquiring lock: {Name:mk1b8df3d0e7e7151d07f634c55bc8cb360d70d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1031 17:40:20.558645   49576 out.go:97] Starting control plane node download-only-174010 in cluster download-only-174010
	I1031 17:40:20.558670   49576 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1031 17:40:20.660453   49576 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1031 17:40:20.660500   49576 cache.go:57] Caching tarball of preloaded images
	I1031 17:40:20.660783   49576 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1031 17:40:20.662994   49576 out.go:97] Downloading Kubernetes v1.25.3 preload ...
	I1031 17:40:20.663015   49576 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
	I1031 17:40:20.759936   49576 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4?checksum=md5:624cb874287e7e3d793b79e4205a7f98 -> /home/jenkins/minikube-integration/15242-42743/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1031 17:40:24.041162   49576 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
	I1031 17:40:24.041280   49576 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15242-42743/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
	I1031 17:40:24.813574   49576 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1031 17:40:24.813743   49576 profile.go:148] Saving config to /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/download-only-174010/config.json ...
	I1031 17:40:24.813963   49576 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1031 17:40:24.814231   49576 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.25.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/15242-42743/.minikube/cache/linux/amd64/v1.25.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-174010"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.3/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.18s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.18s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-174010
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-174026 --alsologtostderr --binary-mirror http://127.0.0.1:36981 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-174026" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-174026
--- PASS: TestBinaryMirror (0.56s)

TestOffline (95.77s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-183258 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-183258 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m34.72679111s)
helpers_test.go:175: Cleaning up "offline-docker-183258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-183258
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-183258: (1.041883945s)
--- PASS: TestOffline (95.77s)

TestAddons/Setup (146.17s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-linux-amd64 start -p addons-174026 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-linux-amd64 start -p addons-174026 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m26.171111215s)
--- PASS: TestAddons/Setup (146.17s)

TestAddons/parallel/Registry (22.65s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:283: registry stabilized in 22.481376ms

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-s9jj8" [e56d304a-da41-45df-b895-18b654c7502b] Running
addons_test.go:285: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011949396s
addons_test.go:288: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-xqd5k" [f126acc3-39c3-431f-a259-24963bf70b9c] Running
addons_test.go:288: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009977688s
addons_test.go:293: (dbg) Run:  kubectl --context addons-174026 delete po -l run=registry-test --now
addons_test.go:298: (dbg) Run:  kubectl --context addons-174026 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:298: (dbg) Done: kubectl --context addons-174026 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (11.953544095s)
addons_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p addons-174026 ip
2022/10/31 17:43:15 [DEBUG] GET http://192.168.39.110:5000
addons_test.go:341: (dbg) Run:  out/minikube-linux-amd64 -p addons-174026 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (22.65s)

TestAddons/parallel/Ingress (28.04s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:165: (dbg) Run:  kubectl --context addons-174026 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:185: (dbg) Run:  kubectl --context addons-174026 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:198: (dbg) Run:  kubectl --context addons-174026 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:203: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [ffce3ab0-5fdd-44f6-a406-6fec8a813bc4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [ffce3ab0-5fdd-44f6-a406-6fec8a813bc4] Running
addons_test.go:203: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 17.029276777s
addons_test.go:215: (dbg) Run:  out/minikube-linux-amd64 -p addons-174026 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:239: (dbg) Run:  kubectl --context addons-174026 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p addons-174026 ip
addons_test.go:250: (dbg) Run:  nslookup hello-john.test 192.168.39.110
addons_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p addons-174026 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p addons-174026 addons disable ingress-dns --alsologtostderr -v=1: (1.419255181s)
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-174026 addons disable ingress --alsologtostderr -v=1
addons_test.go:264: (dbg) Done: out/minikube-linux-amd64 -p addons-174026 addons disable ingress --alsologtostderr -v=1: (7.615459818s)
--- PASS: TestAddons/parallel/Ingress (28.04s)

TestAddons/parallel/MetricsServer (5.68s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:360: metrics-server stabilized in 3.579309ms
addons_test.go:362: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-769cd898cd-t5bxc" [7f1d2c6e-a7dd-4ad4-9547-1f9e383cecf9] Running
addons_test.go:362: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.015862648s
addons_test.go:368: (dbg) Run:  kubectl --context addons-174026 top pods -n kube-system
addons_test.go:385: (dbg) Run:  out/minikube-linux-amd64 -p addons-174026 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.68s)

TestAddons/parallel/HelmTiller (18.7s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: tiller-deploy stabilized in 2.286805ms
addons_test.go:411: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-696b5bfbb7-j5nck" [7ce44c8f-3372-46ea-a29a-18e1fa75ef88] Running
addons_test.go:411: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008685651s
addons_test.go:426: (dbg) Run:  kubectl --context addons-174026 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:426: (dbg) Done: kubectl --context addons-174026 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (13.150471102s)
addons_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p addons-174026 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (18.70s)

TestAddons/parallel/CSI (50.91s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:514: csi-hostpath-driver pods stabilized in 29.455212ms
addons_test.go:517: (dbg) Run:  kubectl --context addons-174026 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:522: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-174026 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:392: (dbg) Run:  kubectl --context addons-174026 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:527: (dbg) Run:  kubectl --context addons-174026 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:532: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [df3a7f7b-e77d-4a13-adb3-d2d13aecd47e] Pending
helpers_test.go:342: "task-pv-pod" [df3a7f7b-e77d-4a13-adb3-d2d13aecd47e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod" [df3a7f7b-e77d-4a13-adb3-d2d13aecd47e] Running
addons_test.go:532: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 21.013880862s
addons_test.go:537: (dbg) Run:  kubectl --context addons-174026 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:542: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-174026 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:417: (dbg) Run:  kubectl --context addons-174026 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:547: (dbg) Run:  kubectl --context addons-174026 delete pod task-pv-pod
addons_test.go:553: (dbg) Run:  kubectl --context addons-174026 delete pvc hpvc
addons_test.go:559: (dbg) Run:  kubectl --context addons-174026 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:564: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-174026 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:392: (dbg) Run:  kubectl --context addons-174026 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:569: (dbg) Run:  kubectl --context addons-174026 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:574: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [8eaeb40a-3870-47c9-97b3-1b8918d0c8a4] Pending
helpers_test.go:342: "task-pv-pod-restore" [8eaeb40a-3870-47c9-97b3-1b8918d0c8a4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [8eaeb40a-3870-47c9-97b3-1b8918d0c8a4] Running
addons_test.go:574: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 15.013158875s
addons_test.go:579: (dbg) Run:  kubectl --context addons-174026 delete pod task-pv-pod-restore
addons_test.go:579: (dbg) Done: kubectl --context addons-174026 delete pod task-pv-pod-restore: (1.209179586s)
addons_test.go:583: (dbg) Run:  kubectl --context addons-174026 delete pvc hpvc-restore
addons_test.go:587: (dbg) Run:  kubectl --context addons-174026 delete volumesnapshot new-snapshot-demo
addons_test.go:591: (dbg) Run:  out/minikube-linux-amd64 -p addons-174026 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:591: (dbg) Done: out/minikube-linux-amd64 -p addons-174026 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.854844387s)
addons_test.go:595: (dbg) Run:  out/minikube-linux-amd64 -p addons-174026 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (50.91s)

TestAddons/parallel/Headlamp (11.21s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:738: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-174026 --alsologtostderr -v=1
addons_test.go:738: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-174026 --alsologtostderr -v=1: (1.198466589s)
addons_test.go:743: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-5f4cf474d8-5qm6g" [61b03237-716f-4f0d-8053-055f8e87706c] Pending
helpers_test.go:342: "headlamp-5f4cf474d8-5qm6g" [61b03237-716f-4f0d-8053-055f8e87706c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:342: "headlamp-5f4cf474d8-5qm6g" [61b03237-716f-4f0d-8053-055f8e87706c] Running
addons_test.go:743: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.011409501s
--- PASS: TestAddons/parallel/Headlamp (11.21s)

TestAddons/parallel/CloudSpanner (5.49s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:759: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:342: "cloud-spanner-emulator-6c47ff8fb6-4698w" [a96c7ac9-014d-4dad-acef-6220beebcf08] Running
addons_test.go:759: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009262181s
addons_test.go:762: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-174026
--- PASS: TestAddons/parallel/CloudSpanner (5.49s)

TestAddons/serial/GCPAuth (44.62s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:606: (dbg) Run:  kubectl --context addons-174026 create -f testdata/busybox.yaml
addons_test.go:613: (dbg) Run:  kubectl --context addons-174026 create sa gcp-auth-test
addons_test.go:619: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [3bd1c34d-3af4-4524-94da-afb46e951272] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [3bd1c34d-3af4-4524-94da-afb46e951272] Running
addons_test.go:619: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.010105944s
addons_test.go:625: (dbg) Run:  kubectl --context addons-174026 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:637: (dbg) Run:  kubectl --context addons-174026 describe sa gcp-auth-test
addons_test.go:675: (dbg) Run:  kubectl --context addons-174026 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:688: (dbg) Run:  out/minikube-linux-amd64 -p addons-174026 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:688: (dbg) Done: out/minikube-linux-amd64 -p addons-174026 addons disable gcp-auth --alsologtostderr -v=1: (6.082315819s)
addons_test.go:704: (dbg) Run:  out/minikube-linux-amd64 -p addons-174026 addons enable gcp-auth
addons_test.go:704: (dbg) Done: out/minikube-linux-amd64 -p addons-174026 addons enable gcp-auth: (2.03885541s)
addons_test.go:710: (dbg) Run:  kubectl --context addons-174026 apply -f testdata/private-image.yaml
addons_test.go:717: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:342: "private-image-5c86c669bd-6dd5p" [76949f4a-42aa-4eb6-b60b-3dc01f365620] Pending
helpers_test.go:342: "private-image-5c86c669bd-6dd5p" [76949f4a-42aa-4eb6-b60b-3dc01f365620] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:342: "private-image-5c86c669bd-6dd5p" [76949f4a-42aa-4eb6-b60b-3dc01f365620] Running
addons_test.go:717: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 18.013031514s
addons_test.go:723: (dbg) Run:  kubectl --context addons-174026 apply -f testdata/private-image-eu.yaml
addons_test.go:728: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:342: "private-image-eu-64c96f687b-c6jbl" [45de1928-5e00-447d-b9cb-3501d0a68d72] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:342: "private-image-eu-64c96f687b-c6jbl" [45de1928-5e00-447d-b9cb-3501d0a68d72] Running
addons_test.go:728: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 9.007558625s
--- PASS: TestAddons/serial/GCPAuth (44.62s)

TestAddons/StoppedEnableDisable (4.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:135: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-174026
addons_test.go:135: (dbg) Done: out/minikube-linux-amd64 stop -p addons-174026: (4.183529257s)
addons_test.go:139: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-174026
addons_test.go:143: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-174026
--- PASS: TestAddons/StoppedEnableDisable (4.40s)

TestCertOptions (75s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-183948 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-183948 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m13.412166008s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-183948 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-183948 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-183948 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-183948" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-183948
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-183948: (1.068494179s)
--- PASS: TestCertOptions (75.00s)

TestCertExpiration (311.86s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-183835 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-183835 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m37.86404561s)
E1031 18:40:31.047445   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/skaffold-183126/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-183835 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-183835 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (32.789685657s)
helpers_test.go:175: Cleaning up "cert-expiration-183835" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-183835
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-183835: (1.205430323s)
--- PASS: TestCertExpiration (311.86s)

TestDockerFlags (115.42s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-183842 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
E1031 18:39:09.126964   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/skaffold-183126/client.crt: no such file or directory
docker_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-183842 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m53.805306051s)
docker_test.go:50: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-183842 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-183842 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-183842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-183842
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-183842: (1.081808182s)
--- PASS: TestDockerFlags (115.42s)

TestForceSystemdFlag (110.09s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-183258 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-183258 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m48.135988175s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-183258 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-183258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-183258
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-183258: (1.627969906s)
--- PASS: TestForceSystemdFlag (110.09s)

TestForceSystemdEnv (75.96s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-183832 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-183832 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m14.599577751s)
docker_test.go:104: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-183832 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-183832" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-183832
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-183832: (1.107328504s)
--- PASS: TestForceSystemdEnv (75.96s)

TestKVMDriverInstallOrUpdate (15.18s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
E1031 18:38:28.166695   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/skaffold-183126/client.crt: no such file or directory
--- PASS: TestKVMDriverInstallOrUpdate (15.18s)

TestErrorSpam/setup (53.08s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-174433 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-174433 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-174433 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-174433 --driver=kvm2 : (53.081861837s)
--- PASS: TestErrorSpam/setup (53.08s)

TestErrorSpam/start (0.4s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174433 --log_dir /tmp/nospam-174433 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174433 --log_dir /tmp/nospam-174433 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174433 --log_dir /tmp/nospam-174433 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

TestErrorSpam/status (0.81s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174433 --log_dir /tmp/nospam-174433 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174433 --log_dir /tmp/nospam-174433 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174433 --log_dir /tmp/nospam-174433 status
--- PASS: TestErrorSpam/status (0.81s)

TestErrorSpam/pause (1.34s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174433 --log_dir /tmp/nospam-174433 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174433 --log_dir /tmp/nospam-174433 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174433 --log_dir /tmp/nospam-174433 pause
--- PASS: TestErrorSpam/pause (1.34s)

TestErrorSpam/unpause (1.42s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174433 --log_dir /tmp/nospam-174433 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174433 --log_dir /tmp/nospam-174433 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174433 --log_dir /tmp/nospam-174433 unpause
--- PASS: TestErrorSpam/unpause (1.42s)

TestErrorSpam/stop (12.53s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174433 --log_dir /tmp/nospam-174433 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-174433 --log_dir /tmp/nospam-174433 stop: (12.336300283s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174433 --log_dir /tmp/nospam-174433 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-174433 --log_dir /tmp/nospam-174433 stop
--- PASS: TestErrorSpam/stop (12.53s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /home/jenkins/minikube-integration/15242-42743/.minikube/files/etc/test/nested/copy/49529/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (69.93s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-linux-amd64 start -p functional-174543 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2161: (dbg) Done: out/minikube-linux-amd64 start -p functional-174543 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m9.930721346s)
--- PASS: TestFunctional/serial/StartWithProxy (69.93s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.52s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-linux-amd64 start -p functional-174543 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-linux-amd64 start -p functional-174543 --alsologtostderr -v=8: (37.516918553s)
functional_test.go:656: soft start took 37.517592143s for "functional-174543" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.52s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-174543 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-174543 cache add k8s.gcr.io/pause:3.1: (1.420466137s)
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-174543 cache add k8s.gcr.io/pause:3.3: (1.580239036s)
functional_test.go:1042: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-linux-amd64 -p functional-174543 cache add k8s.gcr.io/pause:latest: (1.269771888s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.27s)

TestFunctional/serial/CacheCmd/cache/add_local (1.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-174543 /tmp/TestFunctionalserialCacheCmdcacheadd_local3156272783/001
functional_test.go:1082: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 cache add minikube-local-cache-test:functional-174543
functional_test.go:1082: (dbg) Done: out/minikube-linux-amd64 -p functional-174543 cache add minikube-local-cache-test:functional-174543: (1.326340513s)
functional_test.go:1087: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 cache delete minikube-local-cache-test:functional-174543
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-174543
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.57s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174543 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (244.044086ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 cache reload
functional_test.go:1156: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)

TestFunctional/serial/CacheCmd/cache/delete (0.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 kubectl -- --context functional-174543 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-174543 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (39.03s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-linux-amd64 start -p functional-174543 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1031 17:47:52.853905   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 17:47:52.860208   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 17:47:52.870875   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 17:47:52.891023   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 17:47:52.931345   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 17:47:53.011727   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 17:47:53.172135   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 17:47:53.493249   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 17:47:54.134267   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 17:47:55.414926   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 17:47:57.975522   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 17:48:03.096278   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 17:48:13.336534   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
functional_test.go:750: (dbg) Done: out/minikube-linux-amd64 start -p functional-174543 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.03328295s)
functional_test.go:754: restart took 39.033418058s for "functional-174543" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.03s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-174543 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.13s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 logs
functional_test.go:1229: (dbg) Done: out/minikube-linux-amd64 -p functional-174543 logs: (1.125131178s)
--- PASS: TestFunctional/serial/LogsCmd (1.13s)

TestFunctional/serial/LogsFileCmd (1.15s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 logs --file /tmp/TestFunctionalserialLogsFileCmd2340898057/001/logs.txt
functional_test.go:1243: (dbg) Done: out/minikube-linux-amd64 -p functional-174543 logs --file /tmp/TestFunctionalserialLogsFileCmd2340898057/001/logs.txt: (1.154122715s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.15s)

TestFunctional/parallel/ConfigCmd (0.58s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174543 config get cpus: exit status 14 (85.37774ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 config set cpus 2

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174543 config get cpus: exit status 14 (86.223419ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.58s)

TestFunctional/parallel/DashboardCmd (17.86s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-174543 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:903: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-174543 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 53436: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.86s)

TestFunctional/parallel/DryRun (0.4s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-linux-amd64 start -p functional-174543 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-174543 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (216.032231ms)

-- stdout --
	* [functional-174543] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15242-42743/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-42743/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1031 17:48:21.586104   53033 out.go:296] Setting OutFile to fd 1 ...
	I1031 17:48:21.586332   53033 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:48:21.586350   53033 out.go:309] Setting ErrFile to fd 2...
	I1031 17:48:21.586358   53033 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:48:21.586532   53033 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15242-42743/.minikube/bin
	I1031 17:48:21.587307   53033 out.go:303] Setting JSON to false
	I1031 17:48:21.588645   53033 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5454,"bootTime":1667233048,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 17:48:21.588741   53033 start.go:126] virtualization: kvm guest
	I1031 17:48:21.591880   53033 out.go:177] * [functional-174543] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	I1031 17:48:21.593042   53033 notify.go:220] Checking for updates...
	I1031 17:48:21.594351   53033 out.go:177]   - MINIKUBE_LOCATION=15242
	I1031 17:48:21.596417   53033 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 17:48:21.597894   53033 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15242-42743/kubeconfig
	I1031 17:48:21.599354   53033 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-42743/.minikube
	I1031 17:48:21.600749   53033 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 17:48:21.602669   53033 config.go:180] Loaded profile config "functional-174543": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1031 17:48:21.603234   53033 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:48:21.603291   53033 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 17:48:21.638509   53033 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:39213
	I1031 17:48:21.638974   53033 main.go:134] libmachine: () Calling .GetVersion
	I1031 17:48:21.639626   53033 main.go:134] libmachine: Using API Version  1
	I1031 17:48:21.639660   53033 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 17:48:21.640029   53033 main.go:134] libmachine: () Calling .GetMachineName
	I1031 17:48:21.640243   53033 main.go:134] libmachine: (functional-174543) Calling .DriverName
	I1031 17:48:21.640461   53033 driver.go:365] Setting default libvirt URI to qemu:///system
	I1031 17:48:21.640883   53033 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:48:21.640934   53033 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 17:48:21.664644   53033 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:44607
	I1031 17:48:21.665081   53033 main.go:134] libmachine: () Calling .GetVersion
	I1031 17:48:21.665653   53033 main.go:134] libmachine: Using API Version  1
	I1031 17:48:21.665675   53033 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 17:48:21.666022   53033 main.go:134] libmachine: () Calling .GetMachineName
	I1031 17:48:21.666237   53033 main.go:134] libmachine: (functional-174543) Calling .DriverName
	I1031 17:48:21.707316   53033 out.go:177] * Using the kvm2 driver based on existing profile
	I1031 17:48:21.708740   53033 start.go:282] selected driver: kvm2
	I1031 17:48:21.708779   53033 start.go:808] validating driver "kvm2" against &{Name:functional-174543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15159/minikube-v1.27.0-1666206003-15159-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.25.3 ClusterName:functional-174543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.161 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:fal
se nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 17:48:21.708944   53033 start.go:819] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 17:48:21.711636   53033 out.go:177] 
	W1031 17:48:21.713055   53033 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1031 17:48:21.714470   53033 out.go:177] 

** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-linux-amd64 start -p functional-174543 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.40s)

TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 start -p functional-174543 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-174543 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (185.196583ms)

-- stdout --
	* [functional-174543] minikube v1.27.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15242-42743/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-42743/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1031 17:48:21.402745   52975 out.go:296] Setting OutFile to fd 1 ...
	I1031 17:48:21.402877   52975 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:48:21.402894   52975 out.go:309] Setting ErrFile to fd 2...
	I1031 17:48:21.402901   52975 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 17:48:21.403132   52975 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15242-42743/.minikube/bin
	I1031 17:48:21.403816   52975 out.go:303] Setting JSON to false
	I1031 17:48:21.404745   52975 start.go:116] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5453,"bootTime":1667233048,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1021-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1031 17:48:21.404812   52975 start.go:126] virtualization: kvm guest
	I1031 17:48:21.407659   52975 out.go:177] * [functional-174543] minikube v1.27.1 sur Ubuntu 20.04 (kvm/amd64)
	I1031 17:48:21.409274   52975 out.go:177]   - MINIKUBE_LOCATION=15242
	I1031 17:48:21.409200   52975 notify.go:220] Checking for updates...
	I1031 17:48:21.410806   52975 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1031 17:48:21.412453   52975 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15242-42743/kubeconfig
	I1031 17:48:21.413812   52975 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-42743/.minikube
	I1031 17:48:21.415350   52975 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1031 17:48:21.417383   52975 config.go:180] Loaded profile config "functional-174543": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1031 17:48:21.417976   52975 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:48:21.418034   52975 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 17:48:21.434939   52975 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:43213
	I1031 17:48:21.435414   52975 main.go:134] libmachine: () Calling .GetVersion
	I1031 17:48:21.435951   52975 main.go:134] libmachine: Using API Version  1
	I1031 17:48:21.435974   52975 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 17:48:21.436349   52975 main.go:134] libmachine: () Calling .GetMachineName
	I1031 17:48:21.436554   52975 main.go:134] libmachine: (functional-174543) Calling .DriverName
	I1031 17:48:21.436773   52975 driver.go:365] Setting default libvirt URI to qemu:///system
	I1031 17:48:21.437178   52975 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 17:48:21.437219   52975 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 17:48:21.454379   52975 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:37963
	I1031 17:48:21.454781   52975 main.go:134] libmachine: () Calling .GetVersion
	I1031 17:48:21.455396   52975 main.go:134] libmachine: Using API Version  1
	I1031 17:48:21.455428   52975 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 17:48:21.455831   52975 main.go:134] libmachine: () Calling .GetMachineName
	I1031 17:48:21.456019   52975 main.go:134] libmachine: (functional-174543) Calling .DriverName
	I1031 17:48:21.492837   52975 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1031 17:48:21.494123   52975 start.go:282] selected driver: kvm2
	I1031 17:48:21.494157   52975 start.go:808] validating driver "kvm2" against &{Name:functional-174543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15159/minikube-v1.27.0-1666206003-15159-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.35-1666722858-15219@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.25.3 ClusterName:functional-174543 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.161 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:fal
se nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1031 17:48:21.494324   52975 start.go:819] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1031 17:48:21.496854   52975 out.go:177] 
	W1031 17:48:21.498341   52975 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1031 17:48:21.499690   52975 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
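Note that the English and French runs of this dry-run test fail with the same stable error token, `RSRC_INSUFFICIENT_REQ_MEMORY`; only the surrounding message is localized. A log scraper can key on the token instead of the translated text. A minimal sketch (both strings are verbatim from the outputs above):

```shell
# Extract the locale-independent error code from both the English and the
# French variants of the exit message.
en="X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB"
fr="X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo"
for line in "$en" "$fr"; do
  printf '%s\n' "$line" | grep -o 'RSRC_[A-Z_]*'
done
```

Both iterations print `RSRC_INSUFFICIENT_REQ_MEMORY`, which is why the test can validate localized output without hard-coding each translation.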

TestFunctional/parallel/StatusCmd (0.97s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:853: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:865: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.97s)

TestFunctional/parallel/ServiceCmd (10.35s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-174543 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-174543 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-hdp44" [e5b8cf47-3bd5-45fb-8f1b-75daf973ac0c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-hdp44" [e5b8cf47-3bd5-45fb-8f1b-75daf973ac0c] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 8.042512626s
functional_test.go:1449: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1463: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1476: found endpoint: https://192.168.39.161:31411
functional_test.go:1491: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 service hello-node --url --format={{.IP}}

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 service hello-node --url

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1511: found endpoint for hello-node: http://192.168.39.161:31411
--- PASS: TestFunctional/parallel/ServiceCmd (10.35s)
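The endpoints found above follow the NodePort pattern: `minikube service hello-node --url` resolves to `http://<node-ip>:<nodePort>`. A stand-in sketch using the values from this run (in a live cluster they would come from `minikube ip` and the service's allocated nodePort):

```shell
# Reconstruct the service URL from its two components, as reported in the
# log above. Values are taken from this specific run and will differ elsewhere.
node_ip=192.168.39.161
node_port=31411
url="http://${node_ip}:${node_port}"
echo "$url"
```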

TestFunctional/parallel/ServiceCmdConnect (25.59s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-174543 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-174543 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-6458c8fb6f-fv5rv" [085eca93-5290-43e1-bae5-af57a4964a4b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-6458c8fb6f-fv5rv" [085eca93-5290-43e1-bae5-af57a4964a4b] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 25.016145431s
functional_test.go:1579: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 service hello-node-connect --url

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1585: found endpoint for hello-node-connect: http://192.168.39.161:30709
functional_test.go:1605: http://192.168.39.161:30709: success! body:

Hostname: hello-node-connect-6458c8fb6f-fv5rv

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=172.17.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.161:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.161:30709
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (25.59s)

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (53s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [19636d09-aaba-4b2c-a023-81d0d1d3422f] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.016643168s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-174543 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-174543 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-174543 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-174543 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-174543 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [6262343c-c3dc-45ed-ac44-2115493f73ad] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [6262343c-c3dc-45ed-ac44-2115493f73ad] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [6262343c-c3dc-45ed-ac44-2115493f73ad] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 28.015353352s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-174543 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-174543 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-174543 delete -f testdata/storage-provisioner/pod.yaml: (1.595215701s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-174543 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [fde97fed-a191-4350-a9c3-577bfabd01f8] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [fde97fed-a191-4350-a9c3-577bfabd01f8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [fde97fed-a191-4350-a9c3-577bfabd01f8] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.009013246s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-174543 exec sp-pod -- ls /tmp/mount
E1031 17:49:14.777389   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (53.00s)
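The flow above creates a claim, writes `/tmp/mount/foo` from a pod, deletes and recreates the pod, then verifies the file survived. A hedged sketch of a claim with the shape this test exercises; the actual `testdata/storage-provisioner/pvc.yaml` may differ, and the storage size here is illustrative. Only the name `myclaim` is taken from the log:

```yaml
# Minimal PVC of the kind queried above via `kubectl get pvc myclaim`.
# The default-storageclass addon (enabled per the config dump) provides
# the StorageClass, so none is pinned here.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
```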

TestFunctional/parallel/SSHCmd (0.54s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1672: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.54s)

TestFunctional/parallel/CpCmd (1.11s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh -n functional-174543 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 cp functional-174543:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3754708270/001/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh -n functional-174543 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.11s)

TestFunctional/parallel/MySQL (30.38s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-174543 replace --force -f testdata/mysql.yaml
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-8tdln" [5c90d661-63d7-4aeb-a947-d6d69c734fd6] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-8tdln" [5c90d661-63d7-4aeb-a947-d6d69c734fd6] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.014946815s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-174543 exec mysql-596b7fcdbf-8tdln -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-174543 exec mysql-596b7fcdbf-8tdln -- mysql -ppassword -e "show databases;": exit status 1 (199.193143ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-174543 exec mysql-596b7fcdbf-8tdln -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-174543 exec mysql-596b7fcdbf-8tdln -- mysql -ppassword -e "show databases;": exit status 1 (197.246289ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-174543 exec mysql-596b7fcdbf-8tdln -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-174543 exec mysql-596b7fcdbf-8tdln -- mysql -ppassword -e "show databases;": exit status 1 (132.28515ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-174543 exec mysql-596b7fcdbf-8tdln -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (30.38s)
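The test passes despite the transient errors above (access denied while the server bootstraps its users, then the socket briefly unavailable) because it re-runs the query until one attempt succeeds. The same retry shape, sketched with a stub standing in for `kubectl exec ... mysql -ppassword -e "show databases;"`:

```shell
# Retry-until-success loop. `probe` is a hypothetical stand-in for the real
# mysql query; here it fails three times before succeeding, mimicking the
# three failed attempts logged above.
attempts=0
probe() { [ "$attempts" -ge 3 ]; }
until probe; do
  attempts=$((attempts + 1))
  sleep 0.1   # a real harness would back off between attempts
done
echo "query succeeded after $attempts retries"
```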

TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/49529/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh "sudo cat /etc/test/nested/copy/49529/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

TestFunctional/parallel/CertSync (1.56s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/49529.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh "sudo cat /etc/ssl/certs/49529.pem"
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/49529.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh "sudo cat /usr/share/ca-certificates/49529.pem"
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh "sudo cat /etc/ssl/certs/51391683.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: Checking for existence of /etc/ssl/certs/495292.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh "sudo cat /etc/ssl/certs/495292.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/495292.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh "sudo cat /usr/share/ca-certificates/495292.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.56s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-174543 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.25s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh "sudo systemctl is-active crio"
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174543 ssh "sudo systemctl is-active crio": exit status 1 (247.989904ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.25s)
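A note on the non-zero exit above: this check passes precisely because the command fails. A minimal simulated sketch of the convention it relies on (`systemctl is-active` prints the unit state and exits 0 only when the unit is active; 3 is the status for inactive units, matching the `ssh: Process exited with status 3` in the log). The `is_active` helper below is a stand-in for the real `systemctl`, not part of the test suite:

```shell
#!/bin/sh
# Stand-in for `systemctl is-active <unit>`: prints the state and exits
# 0 only for the active runtime. Here docker is the selected runtime,
# so probing crio yields "inactive" with exit status 3 -- which is the
# PASS condition for NonActiveRuntimeDisabled.
is_active() {
  case "$1" in
    docker) echo "active";   return 0 ;;  # the selected runtime
    *)      echo "inactive"; return 3 ;;  # crio, containerd, ...
  esac
}

state=$(is_active crio); rc=$?
echo "crio: ${state} (exit ${rc})"
```

The harness only needs the exit status; the printed state is informational.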

TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.28s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/MountCmd/any-port (9.6s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-174543 /tmp/TestFunctionalparallelMountCmdany-port6154734/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1667238501017488295" to /tmp/TestFunctionalparallelMountCmdany-port6154734/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1667238501017488295" to /tmp/TestFunctionalparallelMountCmdany-port6154734/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1667238501017488295" to /tmp/TestFunctionalparallelMountCmdany-port6154734/001/test-1667238501017488295
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174543 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (281.787168ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 31 17:48 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 31 17:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 31 17:48 test-1667238501017488295
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh cat /mount-9p/test-1667238501017488295
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-174543 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [227b56c6-3f97-4483-be12-82e6b7e09581] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [227b56c6-3f97-4483-be12-82e6b7e09581] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [227b56c6-3f97-4483-be12-82e6b7e09581] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [227b56c6-3f97-4483-be12-82e6b7e09581] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.0093169s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-174543 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh stat /mount-9p/created-by-test
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-174543 /tmp/TestFunctionalparallelMountCmdany-port6154734/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.60s)
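The two `findmnt` probes above (one failing, one succeeding) reflect how the harness polls for the 9p mount: grep's exit status is the signal, 1 while the mount table has no 9p entry and 0 once it does, so the check is simply retried until it succeeds. A simulated sketch, with fake mount-table text standing in for real `findmnt -T /mount-9p` output:

```shell
#!/bin/sh
# Fake mount-table snapshots; the real test reads these from findmnt.
table_before="TARGET SOURCE FSTYPE OPTIONS"    # no 9p entry yet
table_after="/mount-9p host:/tmp/mount 9p rw"  # mount has landed (fake paths)

# Mirrors `findmnt -T /mount-9p | grep 9p`: exit 0 iff a 9p entry exists.
check() { printf '%s\n' "$1" | grep -q 9p; }

check "$table_before" && echo "mounted" || echo "not yet"
check "$table_after"  && echo "mounted" || echo "not yet"
```

Expected: `not yet` on the first probe, `mounted` on the second, matching the retry visible in the log.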

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1311: Took "289.237705ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: Took "78.375404ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1362: Took "274.168788ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1375: Took "90.070724ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

TestFunctional/parallel/DockerEnv/bash (1.18s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-174543 docker-env) && out/minikube-linux-amd64 status -p functional-174543"
=== CONT  TestFunctional/parallel/DockerEnv/bash
functional_test.go:515: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-174543 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.18s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.57s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.57s)

TestFunctional/parallel/MountCmd/specific-port (2.04s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-174543 /tmp/TestFunctionalparallelMountCmdspecific-port3973802554/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174543 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (254.692392ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-174543 /tmp/TestFunctionalparallelMountCmdspecific-port3973802554/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174543 ssh "sudo umount -f /mount-9p": exit status 1 (229.500324ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:225: "out/minikube-linux-amd64 -p functional-174543 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-174543 /tmp/TestFunctionalparallelMountCmdspecific-port3973802554/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.04s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 image ls --format short
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-174543 image ls --format short:
registry.k8s.io/pause:3.8
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-174543
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-174543
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.36s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-174543 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| k8s.gcr.io/pause                            | 3.6               | 6270bb605e12e | 683kB  |
| gcr.io/google-containers/addon-resizer      | functional-174543 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-controller-manager     | v1.25.3           | 6039992312758 | 117MB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| registry.k8s.io/kube-scheduler              | v1.25.3           | 6d23ec0e8b87e | 50.6MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-174543 | abd2cdca1c814 | 30B    |
| docker.io/library/nginx                     | latest            | 76c69feac34e8 | 142MB  |
| registry.k8s.io/pause                       | 3.8               | 4873874c08efc | 711kB  |
| registry.k8s.io/etcd                        | 3.5.4-0           | a8a176a5d5d69 | 300MB  |
| registry.k8s.io/kube-apiserver              | v1.25.3           | 0346dbd74bcb9 | 128MB  |
| registry.k8s.io/kube-proxy                  | v1.25.3           | beaaf00edd38a | 61.7MB |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 image ls --format json
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-174543 image ls --format json:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-174543"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.25.3"],"size":"61700000"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"abd2cdca1c81437ce9696c16a9d5356b308b57e965c6b77c5207040d5f474d10","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-174543"],"size":"30"},{"id":"76c69feac34e85768b284f84416c3546b240e8cb4f68acbbe5ad261a8b36f39f","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.25.3"],"size":"50600000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.8"],"size":"711000"},{"id":"a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.4-0"],"size":"300000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.25.3"],"size":"128000000"},{"id":"60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.25.3"],"size":"117000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 image ls --format yaml
functional_test.go:262: (dbg) Stdout: out/minikube-linux-amd64 -p functional-174543 image ls --format yaml:
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.25.3
size: "61700000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 76c69feac34e85768b284f84416c3546b240e8cb4f68acbbe5ad261a8b36f39f
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.25.3
size: "117000000"
- id: 4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.8
size: "711000"
- id: a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.4-0
size: "300000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-174543
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.25.3
size: "50600000"
- id: 0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.25.3
size: "128000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: abd2cdca1c81437ce9696c16a9d5356b308b57e965c6b77c5207040d5f474d10
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-174543
size: "30"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 ssh pgrep buildkitd
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-174543 ssh pgrep buildkitd: exit status 1 (254.163765ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 image build -t localhost/my-image:functional-174543 testdata/build
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p functional-174543 image build -t localhost/my-image:functional-174543 testdata/build: (4.761257364s)
functional_test.go:316: (dbg) Stdout: out/minikube-linux-amd64 -p functional-174543 image build -t localhost/my-image:functional-174543 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in acf9a8655f43
Removing intermediate container acf9a8655f43
---> 00e6cb6fc563
Step 3/3 : ADD content.txt /
---> 63d6f2c1bb97
Successfully built 63d6f2c1bb97
Successfully tagged localhost/my-image:functional-174543
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.28s)

TestFunctional/parallel/ImageCommands/Setup (1.44s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
E1031 17:48:33.817025   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.407478118s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-174543
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.44s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 image load --daemon gcr.io/google-containers/addon-resizer:functional-174543
functional_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p functional-174543 image load --daemon gcr.io/google-containers/addon-resizer:functional-174543: (4.593832866s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.86s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 image load --daemon gcr.io/google-containers/addon-resizer:functional-174543
2022/10/31 17:48:39 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-linux-amd64 -p functional-174543 image load --daemon gcr.io/google-containers/addon-resizer:functional-174543: (2.359003304s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.66s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.328488288s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-174543
functional_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 image load --daemon gcr.io/google-containers/addon-resizer:functional-174543
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:241: (dbg) Done: out/minikube-linux-amd64 -p functional-174543 image load --daemon gcr.io/google-containers/addon-resizer:functional-174543: (3.578086718s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.18s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 image save gcr.io/google-containers/addon-resizer:functional-174543 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Done: out/minikube-linux-amd64 -p functional-174543 image save gcr.io/google-containers/addon-resizer:functional-174543 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar: (1.872439818s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.87s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 image rm gcr.io/google-containers/addon-resizer:functional-174543
=== CONT  TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.70s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Done: out/minikube-linux-amd64 -p functional-174543 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar: (1.872972082s)
functional_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.11s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-174543
functional_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p functional-174543 image save --daemon gcr.io/google-containers/addon-resizer:functional-174543
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p functional-174543 image save --daemon gcr.io/google-containers/addon-resizer:functional-174543: (2.843539111s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-174543
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.90s)

TestFunctional/delete_addon-resizer_images (0.08s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-174543
--- PASS: TestFunctional/delete_addon-resizer_images (0.08s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-174543
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-174543
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestGvisorAddon (337.33s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-183258 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-183258 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (2m14.811486112s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-183258 cache add gcr.io/k8s-minikube/gvisor-addon:2
=== CONT  TestGvisorAddon
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-183258 cache add gcr.io/k8s-minikube/gvisor-addon:2: (25.391772535s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-183258 addons enable gvisor
=== CONT  TestGvisorAddon
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-183258 addons enable gvisor: (8.175846374s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:342: "gvisor" [03f9661e-19a0-458c-b184-779af6709ac0] Running
=== CONT  TestGvisorAddon
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "kube-system" "kubernetes.io/minikube-addons=gvisor" returned: Get "https://192.168.83.112:8443/api/v1/namespaces/kube-system/pods?labelSelector=kubernetes.io%2Fminikube-addons%3Dgvisor": dial tcp 192.168.83.112:8443: connect: connection refused
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "kube-system" "kubernetes.io/minikube-addons=gvisor" returned: Get "https://192.168.83.112:8443/api/v1/namespaces/kube-system/pods?labelSelector=kubernetes.io%2Fminikube-addons%3Dgvisor": dial tcp 192.168.83.112:8443: connect: connection refused
=== CONT  TestGvisorAddon
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "kube-system" "kubernetes.io/minikube-addons=gvisor" returned: Get "https://192.168.83.112:8443/api/v1/namespaces/kube-system/pods?labelSelector=kubernetes.io%2Fminikube-addons%3Dgvisor": dial tcp 192.168.83.112:8443: connect: connection refused
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "kube-system" "kubernetes.io/minikube-addons=gvisor" returned: Get "https://192.168.83.112:8443/api/v1/namespaces/kube-system/pods?labelSelector=kubernetes.io%2Fminikube-addons%3Dgvisor": dial tcp 192.168.83.112:8443: connect: connection refused
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "kube-system" "kubernetes.io/minikube-addons=gvisor" returned: Get "https://192.168.83.112:8443/api/v1/namespaces/kube-system/pods?labelSelector=kubernetes.io%2Fminikube-addons%3Dgvisor": dial tcp 192.168.83.112:8443: connect: connection refused
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "kube-system" "kubernetes.io/minikube-addons=gvisor" returned: Get "https://192.168.83.112:8443/api/v1/namespaces/kube-system/pods?labelSelector=kubernetes.io%2Fminikube-addons%3Dgvisor": dial tcp 192.168.83.112:8443: connect: connection refused
E1031 18:35:54.237869   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "kube-system" "kubernetes.io/minikube-addons=gvisor" returned: Get "https://192.168.83.112:8443/api/v1/namespaces/kube-system/pods?labelSelector=kubernetes.io%2Fminikube-addons%3Dgvisor": dial tcp 192.168.83.112:8443: connect: connection refused
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "kube-system" "kubernetes.io/minikube-addons=gvisor" returned: Get "https://192.168.83.112:8443/api/v1/namespaces/kube-system/pods?labelSelector=kubernetes.io%2Fminikube-addons%3Dgvisor": dial tcp 192.168.83.112:8443: connect: connection refused
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 20.019137959s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-183258 replace --force -f testdata/nginx-untrusted.yaml
gvisor_addon_test.go:78: (dbg) Run:  kubectl --context gvisor-183258 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:83: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,untrusted=true" in namespace "default" ...
helpers_test.go:342: "nginx-untrusted" [dd615807-6997-4460-b5c1-deab28e1a42a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx-untrusted" [dd615807-6997-4460-b5c1-deab28e1a42a] Running
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "default" "run=nginx,untrusted=true" returned: Get "https://192.168.83.112:8443/api/v1/namespaces/default/pods?labelSelector=run%3Dnginx%2Cuntrusted%3Dtrue": dial tcp 192.168.83.112:8443: connect: connection refused
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "default" "run=nginx,untrusted=true" returned: Get "https://192.168.83.112:8443/api/v1/namespaces/default/pods?labelSelector=run%3Dnginx%2Cuntrusted%3Dtrue": dial tcp 192.168.83.112:8443: connect: connection refused
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "default" "run=nginx,untrusted=true" returned: Get "https://192.168.83.112:8443/api/v1/namespaces/default/pods?labelSelector=run%3Dnginx%2Cuntrusted%3Dtrue": dial tcp 192.168.83.112:8443: connect: connection refused
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "default" "run=nginx,untrusted=true" returned: Get "https://192.168.83.112:8443/api/v1/namespaces/default/pods?labelSelector=run%3Dnginx%2Cuntrusted%3Dtrue": dial tcp 192.168.83.112:8443: connect: connection refused
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "default" "run=nginx,untrusted=true" returned: Get "https://192.168.83.112:8443/api/v1/namespaces/default/pods?labelSelector=run%3Dnginx%2Cuntrusted%3Dtrue": dial tcp 192.168.83.112:8443: connect: connection refused
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "default" "run=nginx,untrusted=true" returned: Get "https://192.168.83.112:8443/api/v1/namespaces/default/pods?labelSelector=run%3Dnginx%2Cuntrusted%3Dtrue": dial tcp 192.168.83.112:8443: connect: connection refused
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "default" "run=nginx,untrusted=true" returned: Get "https://192.168.83.112:8443/api/v1/namespaces/default/pods?labelSelector=run%3Dnginx%2Cuntrusted%3Dtrue": dial tcp 192.168.83.112:8443: connect: connection refused
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "default" "run=nginx,untrusted=true" returned: Get "https://192.168.83.112:8443/api/v1/namespaces/default/pods?labelSelector=run%3Dnginx%2Cuntrusted%3Dtrue": dial tcp 192.168.83.112:8443: connect: connection refused
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "default" "run=nginx,untrusted=true" returned: Get "https://192.168.83.112:8443/api/v1/namespaces/default/pods?labelSelector=run%3Dnginx%2Cuntrusted%3Dtrue": dial tcp 192.168.83.112:8443: connect: connection refused
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "default" "run=nginx,untrusted=true" returned: Get "https://192.168.83.112:8443/api/v1/namespaces/default/pods?labelSelector=run%3Dnginx%2Cuntrusted%3Dtrue": dial tcp 192.168.83.112:8443: connect: connection refused
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "default" "run=nginx,untrusted=true" returned: Get "https://192.168.83.112:8443/api/v1/namespaces/default/pods?labelSelector=run%3Dnginx%2Cuntrusted%3Dtrue": dial tcp 192.168.83.112:8443: connect: connection refused
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "default" "run=nginx,untrusted=true" returned: Get "https://192.168.83.112:8443/api/v1/namespaces/default/pods?labelSelector=run%3Dnginx%2Cuntrusted%3Dtrue": dial tcp 192.168.83.112:8443: connect: connection refused
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "default" "run=nginx,untrusted=true" returned: Get "https://192.168.83.112:8443/api/v1/namespaces/default/pods?labelSelector=run%3Dnginx%2Cuntrusted%3Dtrue": dial tcp 192.168.83.112:8443: connect: connection refused
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "default" "run=nginx,untrusted=true" returned: Get "https://192.168.83.112:8443/api/v1/namespaces/default/pods?labelSelector=run%3Dnginx%2Cuntrusted%3Dtrue": dial tcp 192.168.83.112:8443: connect: connection refused
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "default" "run=nginx,untrusted=true" returned: Get "https://192.168.83.112:8443/api/v1/namespaces/default/pods?labelSelector=run%3Dnginx%2Cuntrusted%3Dtrue": dial tcp 192.168.83.112:8443: connect: connection refused
helpers_test.go:327: TestGvisorAddon: WARNING: pod list for "default" "run=nginx,untrusted=true" returned: Get "https://192.168.83.112:8443/api/v1/namespaces/default/pods?labelSelector=run%3Dnginx%2Cuntrusted%3Dtrue": dial tcp 192.168.83.112:8443: connect: connection refused
gvisor_addon_test.go:83: (dbg) TestGvisorAddon: run=nginx,untrusted=true healthy within 40.017628142s
gvisor_addon_test.go:86: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:342: "nginx-gvisor" [0cc6f3eb-4979-4046-971a-0d4effc253ae] Running
gvisor_addon_test.go:86: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.009009669s
gvisor_addon_test.go:91: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-183258
gvisor_addon_test.go:91: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-183258: (3.163952936s)
gvisor_addon_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-183258 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
=== CONT  TestGvisorAddon
gvisor_addon_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-183258 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m23.273930114s)
gvisor_addon_test.go:100: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:342: "gvisor" [03f9661e-19a0-458c-b184-779af6709ac0] Running
E1031 18:38:21.939902   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
gvisor_addon_test.go:100: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.01862783s
gvisor_addon_test.go:103: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,untrusted=true" in namespace "default" ...
helpers_test.go:342: "nginx-untrusted" [dd615807-6997-4460-b5c1-deab28e1a42a] Running
=== CONT  TestGvisorAddon
gvisor_addon_test.go:103: (dbg) TestGvisorAddon: run=nginx,untrusted=true healthy within 5.008076552s
gvisor_addon_test.go:106: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:342: "nginx-gvisor" [0cc6f3eb-4979-4046-971a-0d4effc253ae] Running
=== CONT  TestGvisorAddon
gvisor_addon_test.go:106: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.010356733s
helpers_test.go:175: Cleaning up "gvisor-183258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-183258
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-183258: (1.242528963s)
--- PASS: TestGvisorAddon (337.33s)

TestIngressAddonLegacy/StartLegacyK8sCluster (74.05s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-174921 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-174921 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m14.048906767s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (74.05s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.28s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-174921 addons enable ingress --alsologtostderr -v=5
E1031 17:50:36.698376   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-174921 addons enable ingress --alsologtostderr -v=5: (18.283509885s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.28s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.48s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-174921 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.48s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (38.76s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:165: (dbg) Run:  kubectl --context ingress-addon-legacy-174921 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:165: (dbg) Done: kubectl --context ingress-addon-legacy-174921 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.308738332s)
addons_test.go:185: (dbg) Run:  kubectl --context ingress-addon-legacy-174921 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:198: (dbg) Run:  kubectl --context ingress-addon-legacy-174921 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:203: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [e792519d-1f23-44ca-b1ba-c1ed2e3cfb33] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [e792519d-1f23-44ca-b1ba-c1ed2e3cfb33] Running
addons_test.go:203: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 13.00631586s
addons_test.go:215: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-174921 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:239: (dbg) Run:  kubectl --context ingress-addon-legacy-174921 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-174921 ip
addons_test.go:250: (dbg) Run:  nslookup hello-john.test 192.168.39.78
addons_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-174921 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:259: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-174921 addons disable ingress-dns --alsologtostderr -v=1: (4.82382783s)
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-174921 addons disable ingress --alsologtostderr -v=1
addons_test.go:264: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-174921 addons disable ingress --alsologtostderr -v=1: (7.366069261s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (38.76s)

TestJSONOutput/start/Command (71.33s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-175134 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-175134 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m11.325430252s)
--- PASS: TestJSONOutput/start/Command (71.33s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.62s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-175134 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.62s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-175134 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.12s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-175134 --output=json --user=testUser
E1031 17:52:52.853598   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-175134 --output=json --user=testUser: (8.122580804s)
--- PASS: TestJSONOutput/stop/Command (8.12s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-175255 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-175255 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (84.243975ms)
-- stdout --
	{"specversion":"1.0","id":"f23857c9-6443-4a23-9491-3c7662829ba0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-175255] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f637111-542b-41de-b48e-b19028108aee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15242"}}
	{"specversion":"1.0","id":"5bbad608-61e8-447d-991b-49e31e1707d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1310cb71-66f1-444a-9632-08cd96abcb0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15242-42743/kubeconfig"}}
	{"specversion":"1.0","id":"c5420f30-23e1-40d2-961e-e4871ce30272","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-42743/.minikube"}}
	{"specversion":"1.0","id":"d4a085fd-0da9-478b-83b9-d9ebffb1a015","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"95681bd2-de6d-42e1-963b-fee121a92916","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-175255" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-175255
--- PASS: TestErrorJSONOutput (0.26s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (111.96s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-175255 --driver=kvm2 
E1031 17:53:20.539198   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 17:53:21.940672   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
E1031 17:53:21.945933   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
E1031 17:53:21.956203   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
E1031 17:53:21.976487   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
E1031 17:53:22.016827   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
E1031 17:53:22.097223   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
E1031 17:53:22.257689   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
E1031 17:53:22.578260   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
E1031 17:53:23.219404   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
E1031 17:53:24.500080   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
E1031 17:53:27.060760   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
E1031 17:53:32.181535   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
E1031 17:53:42.422746   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-175255 --driver=kvm2 : (52.232440992s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-175255 --driver=kvm2 
E1031 17:54:02.903859   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
E1031 17:54:43.864297   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-175255 --driver=kvm2 : (56.761331662s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-175255
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-175255
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-175255" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-175255
helpers_test.go:175: Cleaning up "first-175255" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-175255
--- PASS: TestMinikubeProfile (111.96s)

TestMountStart/serial/StartWithMountFirst (27.77s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-175447 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-175447 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (26.764858999s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.77s)

TestMountStart/serial/VerifyMountFirst (0.43s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-175447 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-175447 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.43s)

TestMountStart/serial/StartWithMountSecond (27.82s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-175447 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-175447 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (26.816775612s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.82s)

TestMountStart/serial/VerifyMountSecond (0.43s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-175447 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-175447 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.43s)

TestMountStart/serial/DeleteFirst (0.89s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-175447 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

TestMountStart/serial/VerifyMountPostDelete (0.44s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-175447 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-175447 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.44s)

TestMountStart/serial/Stop (2.38s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-175447
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-175447: (2.380449484s)
--- PASS: TestMountStart/serial/Stop (2.38s)

TestMountStart/serial/RestartStopped (22.84s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-175447
E1031 17:55:54.238633   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
E1031 17:55:54.243912   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
E1031 17:55:54.254302   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
E1031 17:55:54.274599   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
E1031 17:55:54.314952   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
E1031 17:55:54.395295   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
E1031 17:55:54.555739   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
E1031 17:55:54.876338   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
E1031 17:55:55.517342   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
E1031 17:55:56.797874   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
E1031 17:55:59.358821   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
E1031 17:56:04.479526   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
E1031 17:56:05.785135   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-175447: (21.836961265s)
--- PASS: TestMountStart/serial/RestartStopped (22.84s)

TestMountStart/serial/VerifyMountPostStop (0.42s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-175447 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-175447 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

TestMultiNode/serial/FreshStart2Nodes (159.86s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-175611 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E1031 17:56:14.720565   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
E1031 17:56:35.201584   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
E1031 17:57:16.162715   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
E1031 17:57:52.853541   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 17:58:21.939924   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
E1031 17:58:38.083353   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
E1031 17:58:49.626303   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-175611 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m39.435890784s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (159.86s)

TestMultiNode/serial/DeployApp2Nodes (5.12s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175611 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175611 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-175611 -- rollout status deployment/busybox: (3.259252958s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175611 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175611 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175611 -- exec busybox-65db55d5d6-m9bbn -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175611 -- exec busybox-65db55d5d6-p6579 -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175611 -- exec busybox-65db55d5d6-m9bbn -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175611 -- exec busybox-65db55d5d6-p6579 -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175611 -- exec busybox-65db55d5d6-m9bbn -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175611 -- exec busybox-65db55d5d6-p6579 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.12s)

TestMultiNode/serial/PingHostFrom2Pods (0.99s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175611 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175611 -- exec busybox-65db55d5d6-m9bbn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175611 -- exec busybox-65db55d5d6-m9bbn -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175611 -- exec busybox-65db55d5d6-p6579 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-175611 -- exec busybox-65db55d5d6-p6579 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)

TestMultiNode/serial/AddNode (63.52s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-175611 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-175611 -v 3 --alsologtostderr: (1m2.908588303s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (63.52s)

TestMultiNode/serial/ProfileList (0.24s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

TestMultiNode/serial/CopyFile (8.24s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 cp testdata/cp-test.txt multinode-175611:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 ssh -n multinode-175611 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 cp multinode-175611:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3470561963/001/cp-test_multinode-175611.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 ssh -n multinode-175611 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 cp multinode-175611:/home/docker/cp-test.txt multinode-175611-m02:/home/docker/cp-test_multinode-175611_multinode-175611-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 ssh -n multinode-175611 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 ssh -n multinode-175611-m02 "sudo cat /home/docker/cp-test_multinode-175611_multinode-175611-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 cp multinode-175611:/home/docker/cp-test.txt multinode-175611-m03:/home/docker/cp-test_multinode-175611_multinode-175611-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 ssh -n multinode-175611 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 ssh -n multinode-175611-m03 "sudo cat /home/docker/cp-test_multinode-175611_multinode-175611-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 cp testdata/cp-test.txt multinode-175611-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 ssh -n multinode-175611-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 cp multinode-175611-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3470561963/001/cp-test_multinode-175611-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 ssh -n multinode-175611-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 cp multinode-175611-m02:/home/docker/cp-test.txt multinode-175611:/home/docker/cp-test_multinode-175611-m02_multinode-175611.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 ssh -n multinode-175611-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 ssh -n multinode-175611 "sudo cat /home/docker/cp-test_multinode-175611-m02_multinode-175611.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 cp multinode-175611-m02:/home/docker/cp-test.txt multinode-175611-m03:/home/docker/cp-test_multinode-175611-m02_multinode-175611-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 ssh -n multinode-175611-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 ssh -n multinode-175611-m03 "sudo cat /home/docker/cp-test_multinode-175611-m02_multinode-175611-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 cp testdata/cp-test.txt multinode-175611-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 ssh -n multinode-175611-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 cp multinode-175611-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3470561963/001/cp-test_multinode-175611-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 ssh -n multinode-175611-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 cp multinode-175611-m03:/home/docker/cp-test.txt multinode-175611:/home/docker/cp-test_multinode-175611-m03_multinode-175611.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 ssh -n multinode-175611-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 ssh -n multinode-175611 "sudo cat /home/docker/cp-test_multinode-175611-m03_multinode-175611.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 cp multinode-175611-m03:/home/docker/cp-test.txt multinode-175611-m02:/home/docker/cp-test_multinode-175611-m03_multinode-175611-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 ssh -n multinode-175611-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 ssh -n multinode-175611-m02 "sudo cat /home/docker/cp-test_multinode-175611-m03_multinode-175611-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.24s)

TestMultiNode/serial/StopNode (4.06s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-175611 node stop m03: (3.108574818s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-175611 status: exit status 7 (462.257574ms)

-- stdout --
	multinode-175611
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-175611-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-175611-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-175611 status --alsologtostderr: exit status 7 (489.140633ms)

-- stdout --
	multinode-175611
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-175611-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-175611-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1031 18:00:13.573491   60359 out.go:296] Setting OutFile to fd 1 ...
	I1031 18:00:13.573605   60359 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:00:13.573614   60359 out.go:309] Setting ErrFile to fd 2...
	I1031 18:00:13.573619   60359 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:00:13.573735   60359 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15242-42743/.minikube/bin
	I1031 18:00:13.573905   60359 out.go:303] Setting JSON to false
	I1031 18:00:13.573941   60359 mustload.go:65] Loading cluster: multinode-175611
	I1031 18:00:13.573993   60359 notify.go:220] Checking for updates...
	I1031 18:00:13.574267   60359 config.go:180] Loaded profile config "multinode-175611": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1031 18:00:13.574283   60359 status.go:255] checking status of multinode-175611 ...
	I1031 18:00:13.574643   60359 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:00:13.574704   60359 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:00:13.591502   60359 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:40291
	I1031 18:00:13.592035   60359 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:00:13.593043   60359 main.go:134] libmachine: Using API Version  1
	I1031 18:00:13.593073   60359 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:00:13.593401   60359 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:00:13.593623   60359 main.go:134] libmachine: (multinode-175611) Calling .GetState
	I1031 18:00:13.595270   60359 status.go:330] multinode-175611 host status = "Running" (err=<nil>)
	I1031 18:00:13.595284   60359 host.go:66] Checking if "multinode-175611" exists ...
	I1031 18:00:13.595545   60359 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:00:13.595573   60359 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:00:13.611645   60359 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:35609
	I1031 18:00:13.612125   60359 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:00:13.612683   60359 main.go:134] libmachine: Using API Version  1
	I1031 18:00:13.612717   60359 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:00:13.613096   60359 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:00:13.613294   60359 main.go:134] libmachine: (multinode-175611) Calling .GetIP
	I1031 18:00:13.616562   60359 main.go:134] libmachine: (multinode-175611) DBG | domain multinode-175611 has defined MAC address 52:54:00:e0:a7:be in network mk-multinode-175611
	I1031 18:00:13.617057   60359 main.go:134] libmachine: (multinode-175611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:a7:be", ip: ""} in network mk-multinode-175611: {Iface:virbr1 ExpiryTime:2022-10-31 18:56:26 +0000 UTC Type:0 Mac:52:54:00:e0:a7:be Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:multinode-175611 Clientid:01:52:54:00:e0:a7:be}
	I1031 18:00:13.617082   60359 main.go:134] libmachine: (multinode-175611) DBG | domain multinode-175611 has defined IP address 192.168.39.114 and MAC address 52:54:00:e0:a7:be in network mk-multinode-175611
	I1031 18:00:13.617271   60359 host.go:66] Checking if "multinode-175611" exists ...
	I1031 18:00:13.617681   60359 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:00:13.617720   60359 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:00:13.633868   60359 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:36413
	I1031 18:00:13.634352   60359 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:00:13.634827   60359 main.go:134] libmachine: Using API Version  1
	I1031 18:00:13.634855   60359 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:00:13.635230   60359 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:00:13.635471   60359 main.go:134] libmachine: (multinode-175611) Calling .DriverName
	I1031 18:00:13.635703   60359 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1031 18:00:13.635736   60359 main.go:134] libmachine: (multinode-175611) Calling .GetSSHHostname
	I1031 18:00:13.638928   60359 main.go:134] libmachine: (multinode-175611) DBG | domain multinode-175611 has defined MAC address 52:54:00:e0:a7:be in network mk-multinode-175611
	I1031 18:00:13.639414   60359 main.go:134] libmachine: (multinode-175611) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e0:a7:be", ip: ""} in network mk-multinode-175611: {Iface:virbr1 ExpiryTime:2022-10-31 18:56:26 +0000 UTC Type:0 Mac:52:54:00:e0:a7:be Iaid: IPaddr:192.168.39.114 Prefix:24 Hostname:multinode-175611 Clientid:01:52:54:00:e0:a7:be}
	I1031 18:00:13.639446   60359 main.go:134] libmachine: (multinode-175611) DBG | domain multinode-175611 has defined IP address 192.168.39.114 and MAC address 52:54:00:e0:a7:be in network mk-multinode-175611
	I1031 18:00:13.639571   60359 main.go:134] libmachine: (multinode-175611) Calling .GetSSHPort
	I1031 18:00:13.639764   60359 main.go:134] libmachine: (multinode-175611) Calling .GetSSHKeyPath
	I1031 18:00:13.639905   60359 main.go:134] libmachine: (multinode-175611) Calling .GetSSHUsername
	I1031 18:00:13.640103   60359 sshutil.go:53] new ssh client: &{IP:192.168.39.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15242-42743/.minikube/machines/multinode-175611/id_rsa Username:docker}
	I1031 18:00:13.736466   60359 ssh_runner.go:195] Run: systemctl --version
	I1031 18:00:13.743104   60359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 18:00:13.757094   60359 kubeconfig.go:92] found "multinode-175611" server: "https://192.168.39.114:8443"
	I1031 18:00:13.757138   60359 api_server.go:165] Checking apiserver status ...
	I1031 18:00:13.757180   60359 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1031 18:00:13.780053   60359 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1709/cgroup
	I1031 18:00:13.789053   60359 api_server.go:181] apiserver freezer: "5:freezer:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddde110c17bfb20f517be02bbfcb99a7b.slice/docker-a191d1bd7d53d8a6292b93f31c60b987609f648bf04f5597437141b7c6ea088c.scope"
	I1031 18:00:13.789131   60359 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddde110c17bfb20f517be02bbfcb99a7b.slice/docker-a191d1bd7d53d8a6292b93f31c60b987609f648bf04f5597437141b7c6ea088c.scope/freezer.state
	I1031 18:00:13.805978   60359 api_server.go:203] freezer state: "THAWED"
	I1031 18:00:13.806012   60359 api_server.go:252] Checking apiserver healthz at https://192.168.39.114:8443/healthz ...
	I1031 18:00:13.812059   60359 api_server.go:278] https://192.168.39.114:8443/healthz returned 200:
	ok
	I1031 18:00:13.812088   60359 status.go:421] multinode-175611 apiserver status = Running (err=<nil>)
	I1031 18:00:13.812098   60359 status.go:257] multinode-175611 status: &{Name:multinode-175611 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1031 18:00:13.812113   60359 status.go:255] checking status of multinode-175611-m02 ...
	I1031 18:00:13.812404   60359 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:00:13.812430   60359 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:00:13.827987   60359 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:39305
	I1031 18:00:13.828430   60359 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:00:13.829051   60359 main.go:134] libmachine: Using API Version  1
	I1031 18:00:13.829077   60359 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:00:13.829374   60359 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:00:13.829621   60359 main.go:134] libmachine: (multinode-175611-m02) Calling .GetState
	I1031 18:00:13.831328   60359 status.go:330] multinode-175611-m02 host status = "Running" (err=<nil>)
	I1031 18:00:13.831348   60359 host.go:66] Checking if "multinode-175611-m02" exists ...
	I1031 18:00:13.831681   60359 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:00:13.831743   60359 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:00:13.846967   60359 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:45507
	I1031 18:00:13.847441   60359 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:00:13.847943   60359 main.go:134] libmachine: Using API Version  1
	I1031 18:00:13.847976   60359 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:00:13.848336   60359 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:00:13.848522   60359 main.go:134] libmachine: (multinode-175611-m02) Calling .GetIP
	I1031 18:00:13.851489   60359 main.go:134] libmachine: (multinode-175611-m02) DBG | domain multinode-175611-m02 has defined MAC address 52:54:00:4f:99:b1 in network mk-multinode-175611
	I1031 18:00:13.851864   60359 main.go:134] libmachine: (multinode-175611-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:99:b1", ip: ""} in network mk-multinode-175611: {Iface:virbr1 ExpiryTime:2022-10-31 18:57:53 +0000 UTC Type:0 Mac:52:54:00:4f:99:b1 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:multinode-175611-m02 Clientid:01:52:54:00:4f:99:b1}
	I1031 18:00:13.851929   60359 main.go:134] libmachine: (multinode-175611-m02) DBG | domain multinode-175611-m02 has defined IP address 192.168.39.195 and MAC address 52:54:00:4f:99:b1 in network mk-multinode-175611
	I1031 18:00:13.852118   60359 host.go:66] Checking if "multinode-175611-m02" exists ...
	I1031 18:00:13.852417   60359 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:00:13.852445   60359 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:00:13.867743   60359 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:32811
	I1031 18:00:13.868177   60359 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:00:13.868715   60359 main.go:134] libmachine: Using API Version  1
	I1031 18:00:13.868744   60359 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:00:13.869037   60359 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:00:13.869231   60359 main.go:134] libmachine: (multinode-175611-m02) Calling .DriverName
	I1031 18:00:13.869448   60359 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1031 18:00:13.869481   60359 main.go:134] libmachine: (multinode-175611-m02) Calling .GetSSHHostname
	I1031 18:00:13.872359   60359 main.go:134] libmachine: (multinode-175611-m02) DBG | domain multinode-175611-m02 has defined MAC address 52:54:00:4f:99:b1 in network mk-multinode-175611
	I1031 18:00:13.872807   60359 main.go:134] libmachine: (multinode-175611-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4f:99:b1", ip: ""} in network mk-multinode-175611: {Iface:virbr1 ExpiryTime:2022-10-31 18:57:53 +0000 UTC Type:0 Mac:52:54:00:4f:99:b1 Iaid: IPaddr:192.168.39.195 Prefix:24 Hostname:multinode-175611-m02 Clientid:01:52:54:00:4f:99:b1}
	I1031 18:00:13.872834   60359 main.go:134] libmachine: (multinode-175611-m02) DBG | domain multinode-175611-m02 has defined IP address 192.168.39.195 and MAC address 52:54:00:4f:99:b1 in network mk-multinode-175611
	I1031 18:00:13.873005   60359 main.go:134] libmachine: (multinode-175611-m02) Calling .GetSSHPort
	I1031 18:00:13.873184   60359 main.go:134] libmachine: (multinode-175611-m02) Calling .GetSSHKeyPath
	I1031 18:00:13.873372   60359 main.go:134] libmachine: (multinode-175611-m02) Calling .GetSSHUsername
	I1031 18:00:13.873538   60359 sshutil.go:53] new ssh client: &{IP:192.168.39.195 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15242-42743/.minikube/machines/multinode-175611-m02/id_rsa Username:docker}
	I1031 18:00:13.959859   60359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1031 18:00:13.972232   60359 status.go:257] multinode-175611-m02 status: &{Name:multinode-175611-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1031 18:00:13.972278   60359 status.go:255] checking status of multinode-175611-m03 ...
	I1031 18:00:13.972646   60359 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:00:13.972682   60359 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:00:13.988587   60359 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:39785
	I1031 18:00:13.989058   60359 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:00:13.989545   60359 main.go:134] libmachine: Using API Version  1
	I1031 18:00:13.989572   60359 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:00:13.989934   60359 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:00:13.990177   60359 main.go:134] libmachine: (multinode-175611-m03) Calling .GetState
	I1031 18:00:13.991818   60359 status.go:330] multinode-175611-m03 host status = "Stopped" (err=<nil>)
	I1031 18:00:13.991841   60359 status.go:343] host is not running, skipping remaining checks
	I1031 18:00:13.991849   60359 status.go:257] multinode-175611-m03 status: &{Name:multinode-175611-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (4.06s)

TestMultiNode/serial/StartAfterStop (31.28s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-175611 node start m03 --alsologtostderr: (30.619367425s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.28s)

TestMultiNode/serial/RestartKeepsNodes (901.96s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-175611
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-175611
E1031 18:00:54.238616   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-175611: (18.512172553s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-175611 --wait=true -v=8 --alsologtostderr
E1031 18:01:21.924056   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
E1031 18:02:52.853935   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 18:03:21.940684   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
E1031 18:04:15.899480   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 18:05:54.238169   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
E1031 18:07:52.853949   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 18:08:21.940044   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
E1031 18:09:44.986494   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
E1031 18:10:54.237395   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
E1031 18:12:17.284929   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
E1031 18:12:52.853872   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 18:13:21.940639   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-175611 --wait=true -v=8 --alsologtostderr: (14m43.30311695s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-175611
--- PASS: TestMultiNode/serial/RestartKeepsNodes (901.96s)

TestMultiNode/serial/DeleteNode (3.83s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-175611 node delete m03: (3.276151422s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 status --alsologtostderr
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (3.83s)

TestMultiNode/serial/StopMultiNode (5.68s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 stop
E1031 18:15:54.237750   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-175611 stop: (5.470070237s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-175611 status: exit status 7 (102.857162ms)

-- stdout --
	multinode-175611
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-175611-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-175611 status --alsologtostderr: exit status 7 (103.968265ms)

-- stdout --
	multinode-175611
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-175611-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1031 18:15:56.696574   61454 out.go:296] Setting OutFile to fd 1 ...
	I1031 18:15:56.696723   61454 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:15:56.696738   61454 out.go:309] Setting ErrFile to fd 2...
	I1031 18:15:56.696746   61454 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1031 18:15:56.696842   61454 root.go:334] Updating PATH: /home/jenkins/minikube-integration/15242-42743/.minikube/bin
	I1031 18:15:56.696988   61454 out.go:303] Setting JSON to false
	I1031 18:15:56.697019   61454 mustload.go:65] Loading cluster: multinode-175611
	I1031 18:15:56.697128   61454 notify.go:220] Checking for updates...
	I1031 18:15:56.697466   61454 config.go:180] Loaded profile config "multinode-175611": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1031 18:15:56.697489   61454 status.go:255] checking status of multinode-175611 ...
	I1031 18:15:56.697970   61454 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:15:56.698015   61454 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:15:56.713846   61454 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:39005
	I1031 18:15:56.714357   61454 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:15:56.714892   61454 main.go:134] libmachine: Using API Version  1
	I1031 18:15:56.714915   61454 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:15:56.715240   61454 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:15:56.715415   61454 main.go:134] libmachine: (multinode-175611) Calling .GetState
	I1031 18:15:56.717002   61454 status.go:330] multinode-175611 host status = "Stopped" (err=<nil>)
	I1031 18:15:56.717028   61454 status.go:343] host is not running, skipping remaining checks
	I1031 18:15:56.717040   61454 status.go:257] multinode-175611 status: &{Name:multinode-175611 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1031 18:15:56.717069   61454 status.go:255] checking status of multinode-175611-m02 ...
	I1031 18:15:56.717308   61454 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1031 18:15:56.717340   61454 main.go:134] libmachine: Launching plugin server for driver kvm2
	I1031 18:15:56.732625   61454 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:41653
	I1031 18:15:56.732969   61454 main.go:134] libmachine: () Calling .GetVersion
	I1031 18:15:56.733367   61454 main.go:134] libmachine: Using API Version  1
	I1031 18:15:56.733392   61454 main.go:134] libmachine: () Calling .SetConfigRaw
	I1031 18:15:56.733664   61454 main.go:134] libmachine: () Calling .GetMachineName
	I1031 18:15:56.733831   61454 main.go:134] libmachine: (multinode-175611-m02) Calling .GetState
	I1031 18:15:56.735263   61454 status.go:330] multinode-175611-m02 host status = "Stopped" (err=<nil>)
	I1031 18:15:56.735278   61454 status.go:343] host is not running, skipping remaining checks
	I1031 18:15:56.735285   61454 status.go:257] multinode-175611-m02 status: &{Name:multinode-175611-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (5.68s)

TestMultiNode/serial/RestartMultiNode (614.17s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-175611 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E1031 18:17:52.854158   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 18:18:21.940187   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
E1031 18:20:54.238069   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
E1031 18:20:55.900675   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 18:22:52.853480   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 18:23:21.939870   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
E1031 18:25:54.237607   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-175611 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (10m13.619530574s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-175611 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (614.17s)

TestPreload (182.39s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-182617 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E1031 18:26:24.987334   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
E1031 18:27:52.854031   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-182617 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m58.01555722s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-182617 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-182617 -- docker pull gcr.io/k8s-minikube/busybox: (1.628255635s)
preload_test.go:67: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-182617 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --kubernetes-version=v1.24.6
E1031 18:28:21.940539   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
E1031 18:28:57.285789   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
preload_test.go:67: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-182617 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --kubernetes-version=v1.24.6: (1m1.321862401s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-182617 -- docker images
helpers_test.go:175: Cleaning up "test-preload-182617" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-182617
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-182617: (1.142381085s)
--- PASS: TestPreload (182.39s)

TestScheduledStopUnix (126.08s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-182920 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-182920 --memory=2048 --driver=kvm2 : (54.215542262s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-182920 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-182920 -n scheduled-stop-182920
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-182920 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-182920 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-182920 -n scheduled-stop-182920
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-182920
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-182920 --schedule 15s
E1031 18:30:54.238522   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-182920
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-182920: exit status 7 (85.652941ms)

-- stdout --
	scheduled-stop-182920
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-182920 -n scheduled-stop-182920
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-182920 -n scheduled-stop-182920: exit status 7 (87.369641ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-182920" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-182920
--- PASS: TestScheduledStopUnix (126.08s)

TestSkaffold (92.06s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe263871884 version
skaffold_test.go:63: skaffold version: v2.0.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-183126 --memory=2600 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-183126 --memory=2600 --driver=kvm2 : (53.661124486s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:110: (dbg) Run:  /tmp/skaffold.exe263871884 run --minikube-profile skaffold-183126 --kube-context skaffold-183126 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:110: (dbg) Done: /tmp/skaffold.exe263871884 run --minikube-profile skaffold-183126 --kube-context skaffold-183126 --status-check=true --port-forward=false --interactive=false: (26.661283167s)
skaffold_test.go:116: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-6966887cdf-ttj2b" [1a1e9175-4d4a-4c63-9e45-b16f8b22661b] Running
skaffold_test.go:116: (dbg) TestSkaffold: app=leeroy-app healthy within 5.013953546s
skaffold_test.go:119: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-5d9c4bc45c-rd6fc" [15049778-d88e-4b19-bf87-38d5ec32c9e2] Running
E1031 18:32:52.854018   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
skaffold_test.go:119: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006007007s
helpers_test.go:175: Cleaning up "skaffold-183126" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-183126
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-183126: (1.076151556s)
--- PASS: TestSkaffold (92.06s)

TestRunningBinaryUpgrade (202.26s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.6.2.3788123564.exe start -p running-upgrade-183448 --memory=2200 --vm-driver=kvm2 

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Non-zero exit: /tmp/minikube-v1.6.2.3788123564.exe start -p running-upgrade-183448 --memory=2200 --vm-driver=kvm2 : exit status 70 (46.800554461s)

-- stdout --
	* [running-upgrade-183448] minikube v1.6.2 on Ubuntu 20.04
	  - MINIKUBE_LOCATION=15242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-42743/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - KUBECONFIG=/tmp/legacy_kubeconfig3972674167
	* Selecting 'kvm2' driver from user configuration (alternates: [none])
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...

-- /stdout --
** stderr ** 
	
	! 'kvm2' driver reported an issue: /usr/bin/virsh domcapabilities --virttype kvm failed:
	error: failed to get emulator capabilities
	error: invalid argument: KVM is not supported by '/usr/bin/qemu-system-x86_64' on this host
	* Suggestion: Follow your Linux distribution instructions for configuring KVM
	* Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
	
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.6.2.3788123564.exe start -p running-upgrade-183448 --memory=2200 --vm-driver=kvm2 

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.6.2.3788123564.exe start -p running-upgrade-183448 --memory=2200 --vm-driver=kvm2 : (1m32.976488341s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-183448 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-183448 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (59.437759115s)
helpers_test.go:175: Cleaning up "running-upgrade-183448" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-183448
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-183448: (1.542732901s)
--- PASS: TestRunningBinaryUpgrade (202.26s)

TestStoppedBinaryUpgrade/Setup (0.37s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.37s)

TestStoppedBinaryUpgrade/Upgrade (230.38s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.6.2.3825092665.exe start -p stopped-upgrade-183434 --memory=2200 --vm-driver=kvm2 

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.6.2.3825092665.exe start -p stopped-upgrade-183434 --memory=2200 --vm-driver=kvm2 : (2m36.156299072s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.6.2.3825092665.exe -p stopped-upgrade-183434 stop

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.6.2.3825092665.exe -p stopped-upgrade-183434 stop: (13.352131851s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-183434 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E1031 18:37:35.901235   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 18:37:47.204976   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/skaffold-183126/client.crt: no such file or directory
E1031 18:37:47.210234   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/skaffold-183126/client.crt: no such file or directory
E1031 18:37:47.220522   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/skaffold-183126/client.crt: no such file or directory
E1031 18:37:47.240824   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/skaffold-183126/client.crt: no such file or directory
E1031 18:37:47.281149   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/skaffold-183126/client.crt: no such file or directory
E1031 18:37:47.361538   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/skaffold-183126/client.crt: no such file or directory
E1031 18:37:47.522097   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/skaffold-183126/client.crt: no such file or directory
E1031 18:37:47.842561   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/skaffold-183126/client.crt: no such file or directory
E1031 18:37:48.483521   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/skaffold-183126/client.crt: no such file or directory
E1031 18:37:49.763809   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/skaffold-183126/client.crt: no such file or directory
E1031 18:37:52.324268   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/skaffold-183126/client.crt: no such file or directory
E1031 18:37:52.853558   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 18:37:57.445487   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/skaffold-183126/client.crt: no such file or directory
E1031 18:38:07.686556   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/skaffold-183126/client.crt: no such file or directory

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-183434 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m0.873338984s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (230.38s)

TestPause/serial/Start (83.57s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-183550 --memory=2048 --install-addons=false --wait=all --driver=kvm2 

=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-183550 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m23.568821875s)
--- PASS: TestPause/serial/Start (83.57s)

TestPause/serial/SecondStartNoReconfiguration (74.18s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-183550 --alsologtostderr -v=1 --driver=kvm2 

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-183550 --alsologtostderr -v=1 --driver=kvm2 : (1m14.161874812s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (74.18s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.14s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-183810 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-183810 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (135.741239ms)

-- stdout --
	* [NoKubernetes-183810] minikube v1.27.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15242
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15242-42743/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15242-42743/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.14s)

TestNoKubernetes/serial/StartWithK8s (63.55s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-183810 --driver=kvm2 

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-183810 --driver=kvm2 : (1m3.241875799s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-183810 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (63.55s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.46s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-183434
version_upgrade_test.go:213: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-183434: (1.457736839s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.46s)

TestPause/serial/Pause (0.7s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-183550 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.70s)

TestPause/serial/VerifyStatus (0.3s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-183550 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-183550 --output=json --layout=cluster: exit status 2 (299.217412ms)

-- stdout --
	{"Name":"pause-183550","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.27.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-183550","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)

TestPause/serial/Unpause (0.69s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-183550 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

TestPause/serial/PauseAgain (0.81s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-183550 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.81s)

TestPause/serial/DeletePaused (1.07s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-183550 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-183550 --alsologtostderr -v=5: (1.068946602s)
--- PASS: TestPause/serial/DeletePaused (1.07s)

TestPause/serial/VerifyDeletedResources (0.31s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.31s)

TestNoKubernetes/serial/StartWithStopK8s (54.27s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-183810 --no-kubernetes --driver=kvm2 

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-183810 --no-kubernetes --driver=kvm2 : (52.65031218s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-183810 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-183810 status -o json: exit status 2 (277.714227ms)

-- stdout --
	{"Name":"NoKubernetes-183810","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-183810
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-183810: (1.34116214s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (54.27s)

TestNoKubernetes/serial/Start (42.53s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-183810 --no-kubernetes --driver=kvm2 

=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-183810 --no-kubernetes --driver=kvm2 : (42.529197367s)
--- PASS: TestNoKubernetes/serial/Start (42.53s)

TestNetworkPlugins/group/auto/Start (89.4s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p auto-183258 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=kvm2 
E1031 18:40:46.693814   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/gvisor-183258/client.crt: no such file or directory
E1031 18:40:46.699144   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/gvisor-183258/client.crt: no such file or directory
E1031 18:40:46.709456   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/gvisor-183258/client.crt: no such file or directory
E1031 18:40:46.729721   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/gvisor-183258/client.crt: no such file or directory
E1031 18:40:46.770067   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/gvisor-183258/client.crt: no such file or directory
E1031 18:40:46.850409   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/gvisor-183258/client.crt: no such file or directory
E1031 18:40:47.010830   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/gvisor-183258/client.crt: no such file or directory
E1031 18:40:47.331526   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/gvisor-183258/client.crt: no such file or directory
E1031 18:40:47.972458   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/gvisor-183258/client.crt: no such file or directory
E1031 18:40:49.252828   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/gvisor-183258/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p auto-183258 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=kvm2 : (1m29.398526024s)
--- PASS: TestNetworkPlugins/group/auto/Start (89.40s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-183810 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-183810 "sudo systemctl is-active --quiet service kubelet": exit status 1 (246.958873ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

TestNoKubernetes/serial/ProfileList (1.31s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
E1031 18:40:51.813574   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/gvisor-183258/client.crt: no such file or directory
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.31s)

TestNoKubernetes/serial/Stop (2.22s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-183810
E1031 18:40:54.237941   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-183810: (2.222771359s)
--- PASS: TestNoKubernetes/serial/Stop (2.22s)
TestNoKubernetes/serial/StartNoArgs (40.73s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-183810 --driver=kvm2 
E1031 18:40:56.933852   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/gvisor-183258/client.crt: no such file or directory
=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-183810 --driver=kvm2 : (40.731650258s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (40.73s)
TestNetworkPlugins/group/kindnet/Start (125.16s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-183258 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=kvm2 
E1031 18:41:07.174181   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/gvisor-183258/client.crt: no such file or directory
E1031 18:41:27.654582   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/gvisor-183258/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-183258 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=kvm2 : (2m5.162458593s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (125.16s)
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-183810 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-183810 "sudo systemctl is-active --quiet service kubelet": exit status 1 (260.16347ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)
TestNetworkPlugins/group/cilium/Start (152s)
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-183258 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=kvm2 
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p cilium-183258 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=kvm2 : (2m31.99876889s)
--- PASS: TestNetworkPlugins/group/cilium/Start (152.00s)
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-183258 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)
TestNetworkPlugins/group/auto/NetCatPod (16.24s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-183258 replace --force -f testdata/netcat-deployment.yaml
E1031 18:42:08.615406   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/gvisor-183258/client.crt: no such file or directory
net_test.go:138: (dbg) Done: kubectl --context auto-183258 replace --force -f testdata/netcat-deployment.yaml: (1.781272274s)
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-d62sw" [2583889a-999a-4bde-a523-cde2c16349ad] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-d62sw" [2583889a-999a-4bde-a523-cde2c16349ad] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.010231144s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (16.24s)
TestNetworkPlugins/group/auto/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-183258 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)
TestNetworkPlugins/group/auto/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)
TestNetworkPlugins/group/auto/HairPin (5.18s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.178912349s)
** stderr ** 
	command terminated with exit code 1
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.18s)
TestNetworkPlugins/group/calico/Start (337.3s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p calico-183258 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=kvm2 
E1031 18:42:47.204943   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/skaffold-183126/client.crt: no such file or directory
E1031 18:42:52.854189   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 18:43:04.988392   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p calico-183258 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=kvm2 : (5m37.303268356s)
--- PASS: TestNetworkPlugins/group/calico/Start (337.30s)
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-n5v66" [bb45f150-7a53-49a7-b365-3358d062ef3d] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.018787656s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-183258 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)
TestNetworkPlugins/group/kindnet/NetCatPod (13.57s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-183258 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-xb76j" [d60c751a-e23b-4e46-b085-3d6a3885addb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1031 18:43:14.887948   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/skaffold-183126/client.crt: no such file or directory
E1031 18:43:21.939875   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-xb76j" [d60c751a-e23b-4e46-b085-3d6a3885addb] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.015300379s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.57s)
TestNetworkPlugins/group/kindnet/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-183258 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)
TestNetworkPlugins/group/kindnet/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)
TestNetworkPlugins/group/kindnet/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)
TestNetworkPlugins/group/custom-flannel/Start (81.64s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-183258 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/kube-flannel.yaml --driver=kvm2 
E1031 18:43:30.536620   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/gvisor-183258/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/custom-flannel/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-183258 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m21.643093586s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (81.64s)
TestNetworkPlugins/group/false/Start (87.19s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p false-183258 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=kvm2 
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p false-183258 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=kvm2 : (1m27.188605572s)
--- PASS: TestNetworkPlugins/group/false/Start (87.19s)
TestNetworkPlugins/group/cilium/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-68rkl" [f63a1e9b-6037-4b89-8cfa-563fd791e7b7] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.023049603s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)
TestNetworkPlugins/group/cilium/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-183258 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.26s)
TestNetworkPlugins/group/cilium/NetCatPod (14.24s)
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-183258 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context cilium-183258 replace --force -f testdata/netcat-deployment.yaml: (1.067798739s)
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-skkh5" [bbc9a875-7447-4df1-a171-4cc8a708465e] Pending
helpers_test.go:342: "netcat-5788d667bd-skkh5" [bbc9a875-7447-4df1-a171-4cc8a708465e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-skkh5" [bbc9a875-7447-4df1-a171-4cc8a708465e] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 13.088173461s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (14.24s)
TestNetworkPlugins/group/cilium/DNS (0.27s)
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-183258 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.27s)
TestNetworkPlugins/group/cilium/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.17s)
TestNetworkPlugins/group/cilium/HairPin (0.26s)
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.26s)
TestNetworkPlugins/group/enable-default-cni/Start (111.78s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-183258 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=kvm2 
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-183258 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=kvm2 : (1m51.775467203s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (111.78s)
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-183258 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.37s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context custom-flannel-183258 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-8t9zd" [2e9a213c-bb07-4ebc-8130-53c9836d38ca] Pending
helpers_test.go:342: "netcat-5788d667bd-8t9zd" [2e9a213c-bb07-4ebc-8130-53c9836d38ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-8t9zd" [2e9a213c-bb07-4ebc-8130-53c9836d38ca] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.012044773s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.37s)
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:169: (dbg) Run:  kubectl --context custom-flannel-183258 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)
TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:188: (dbg) Run:  kubectl --context custom-flannel-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)
TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:238: (dbg) Run:  kubectl --context custom-flannel-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)
TestNetworkPlugins/group/flannel/Start (91.09s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-183258 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=kvm2 
=== CONT  TestNetworkPlugins/group/flannel/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p flannel-183258 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=kvm2 : (1m31.088273979s)
--- PASS: TestNetworkPlugins/group/flannel/Start (91.09s)
TestNetworkPlugins/group/false/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-183258 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.26s)
TestNetworkPlugins/group/false/NetCatPod (13.39s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-183258 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-bf87c" [125a72e6-c1c4-4536-a64a-3d9f7e0d723c] Pending
helpers_test.go:342: "netcat-5788d667bd-bf87c" [125a72e6-c1c4-4536-a64a-3d9f7e0d723c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-bf87c" [125a72e6-c1c4-4536-a64a-3d9f7e0d723c] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.026461471s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.39s)
TestNetworkPlugins/group/false/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-183258 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.21s)
TestNetworkPlugins/group/false/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:188: (dbg) Run:  kubectl --context false-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)
TestNetworkPlugins/group/false/HairPin (5.16s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Run:  kubectl --context false-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context false-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.161921471s)
** stderr ** 
	command terminated with exit code 1
** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.16s)
TestNetworkPlugins/group/bridge/Start (78.05s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-183258 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=kvm2 
E1031 18:45:37.286527   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
E1031 18:45:46.693467   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/gvisor-183258/client.crt: no such file or directory
E1031 18:45:54.237829   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
E1031 18:46:14.377642   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/gvisor-183258/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p bridge-183258 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=kvm2 : (1m18.052006797s)
--- PASS: TestNetworkPlugins/group/bridge/Start (78.05s)
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-183258 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.39s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-183258 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-2f9vz" [699bdc3d-1835-409a-8e73-328ce3717c27] Pending
helpers_test.go:342: "netcat-5788d667bd-2f9vz" [699bdc3d-1835-409a-8e73-328ce3717c27] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-2f9vz" [699bdc3d-1835-409a-8e73-328ce3717c27] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.012850804s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.39s)
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-183258 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-system" ...
helpers_test.go:342: "kube-flannel-ds-amd64-d44vf" [25b04103-b69f-47dd-8abf-85b39ae49068] Running

=== CONT  TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.018773505s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/kubenet/Start (79.91s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-183258 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=kvm2 

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-183258 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=kvm2 : (1m19.913998334s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (79.91s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-183258 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/flannel/NetCatPod (14.52s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context flannel-183258 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-msk2p" [6a77ae86-6be6-4ebf-8777-d2676519a9d4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-msk2p" [6a77ae86-6be6-4ebf-8777-d2676519a9d4] Running

=== CONT  TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.016140142s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.52s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-183258 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

TestNetworkPlugins/group/bridge/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-183258 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-24rjm" [1fd891ba-04f5-4f11-a704-f9b380f83d4e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-24rjm" [1fd891ba-04f5-4f11-a704-f9b380f83d4e] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.008268611s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.36s)

TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:169: (dbg) Run:  kubectl --context flannel-183258 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:188: (dbg) Run:  kubectl --context flannel-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:238: (dbg) Run:  kubectl --context flannel-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

TestStartStop/group/old-k8s-version/serial/FirstStart (151.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-184658 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-184658 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (2m31.691062183s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (151.69s)

TestNetworkPlugins/group/bridge/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-183258 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

TestNetworkPlugins/group/bridge/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.22s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestStartStop/group/no-preload/serial/FirstStart (155.91s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-184708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.25.3
E1031 18:47:09.766089   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/auto-183258/client.crt: no such file or directory
E1031 18:47:09.771507   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/auto-183258/client.crt: no such file or directory
E1031 18:47:09.781833   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/auto-183258/client.crt: no such file or directory
E1031 18:47:09.802162   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/auto-183258/client.crt: no such file or directory
E1031 18:47:09.842445   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/auto-183258/client.crt: no such file or directory
E1031 18:47:09.922779   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/auto-183258/client.crt: no such file or directory
E1031 18:47:10.083968   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/auto-183258/client.crt: no such file or directory
E1031 18:47:10.404677   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/auto-183258/client.crt: no such file or directory
E1031 18:47:11.045537   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/auto-183258/client.crt: no such file or directory
E1031 18:47:12.325930   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/auto-183258/client.crt: no such file or directory
E1031 18:47:14.886723   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/auto-183258/client.crt: no such file or directory
E1031 18:47:20.007231   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/auto-183258/client.crt: no such file or directory
E1031 18:47:30.247759   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/auto-183258/client.crt: no such file or directory
E1031 18:47:47.205052   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/skaffold-183126/client.crt: no such file or directory
E1031 18:47:50.728344   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/auto-183258/client.crt: no such file or directory
E1031 18:47:52.853909   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-184708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.25.3: (2m35.907435828s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (155.91s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-183258 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kubenet/NetCatPod (14.41s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-183258 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-ztjhd" [c531e8ce-f527-4732-807c-7f5e0c74c46e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-ztjhd" [c531e8ce-f527-4732-807c-7f5e0c74c46e] Running

=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 14.010114009s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (14.41s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-glz92" [b86803d4-7444-4b0b-a4b1-844e580a0263] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1031 18:48:08.392709   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kindnet-183258/client.crt: no such file or directory
E1031 18:48:08.398034   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kindnet-183258/client.crt: no such file or directory
E1031 18:48:08.408341   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kindnet-183258/client.crt: no such file or directory
E1031 18:48:08.428693   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kindnet-183258/client.crt: no such file or directory
E1031 18:48:08.468838   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kindnet-183258/client.crt: no such file or directory
E1031 18:48:08.549202   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kindnet-183258/client.crt: no such file or directory
E1031 18:48:08.709650   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kindnet-183258/client.crt: no such file or directory
E1031 18:48:09.030045   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kindnet-183258/client.crt: no such file or directory
E1031 18:48:09.671288   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kindnet-183258/client.crt: no such file or directory
E1031 18:48:10.951482   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kindnet-183258/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.020965945s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/kubenet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-183258 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.21s)

TestNetworkPlugins/group/kubenet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kubenet-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.18s)

TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-183258 "pgrep -a kubelet"
E1031 18:48:13.512705   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kindnet-183258/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

TestNetworkPlugins/group/calico/NetCatPod (16.44s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-183258 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-d5ztc" [84dc09bc-12c1-4bde-8724-9e9c41f9732a] Pending
helpers_test.go:342: "netcat-5788d667bd-d5ztc" [84dc09bc-12c1-4bde-8724-9e9c41f9732a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-d5ztc" [84dc09bc-12c1-4bde-8724-9e9c41f9732a] Running

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 16.016722644s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (16.44s)

TestNetworkPlugins/group/calico/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-183258 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

TestNetworkPlugins/group/calico/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:188: (dbg) Run:  kubectl --context calico-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:238: (dbg) Run:  kubectl --context calico-183258 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestStartStop/group/embed-certs/serial/FirstStart (81.75s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-184831 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-184831 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.25.3: (1m21.753486885s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (81.75s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.63s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-184915 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.25.3
E1031 18:49:19.145401   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/cilium-183258/client.crt: no such file or directory
E1031 18:49:29.385806   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/cilium-183258/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-184915 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.25.3: (1m19.6297852s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.63s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-184658 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [5fd60088-a256-4a2d-a333-c5ed2d801b52] Pending
E1031 18:49:30.315757   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kindnet-183258/client.crt: no such file or directory
helpers_test.go:342: "busybox" [5fd60088-a256-4a2d-a333-c5ed2d801b52] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [5fd60088-a256-4a2d-a333-c5ed2d801b52] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.031667644s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-184658 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.42s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-184658 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-184658 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.76s)

TestStartStop/group/old-k8s-version/serial/Stop (4.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-184658 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-184658 --alsologtostderr -v=3: (4.149641709s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.15s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-184658 -n old-k8s-version-184658
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-184658 -n old-k8s-version-184658: exit status 7 (120.694846ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-184658 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/old-k8s-version/serial/SecondStart (416.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-184658 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-184658 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (6m56.010320865s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-184658 -n old-k8s-version-184658
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (416.32s)

TestStartStop/group/no-preload/serial/DeployApp (13.47s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-184708 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [31859e19-7dc0-4557-b65c-0431c8f4ecca] Pending
helpers_test.go:342: "busybox" [31859e19-7dc0-4557-b65c-0431c8f4ecca] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1031 18:49:49.866871   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/cilium-183258/client.crt: no such file or directory
E1031 18:49:51.223744   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/custom-flannel-183258/client.crt: no such file or directory
E1031 18:49:51.229109   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/custom-flannel-183258/client.crt: no such file or directory
E1031 18:49:51.239422   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/custom-flannel-183258/client.crt: no such file or directory
E1031 18:49:51.259770   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/custom-flannel-183258/client.crt: no such file or directory
E1031 18:49:51.300331   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/custom-flannel-183258/client.crt: no such file or directory
E1031 18:49:51.380686   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/custom-flannel-183258/client.crt: no such file or directory
E1031 18:49:51.540871   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/custom-flannel-183258/client.crt: no such file or directory
E1031 18:49:51.861385   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/custom-flannel-183258/client.crt: no such file or directory
helpers_test.go:342: "busybox" [31859e19-7dc0-4557-b65c-0431c8f4ecca] Running
E1031 18:49:52.502179   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/custom-flannel-183258/client.crt: no such file or directory
E1031 18:49:53.609452   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/auto-183258/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 13.024070673s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-184708 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (13.47s)

TestStartStop/group/embed-certs/serial/DeployApp (9.47s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-184831 create -f testdata/busybox.yaml
E1031 18:49:53.782358   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/custom-flannel-183258/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [086f0047-245e-4202-b00d-219c0a2684b1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1031 18:49:56.343308   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/custom-flannel-183258/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:342: "busybox" [086f0047-245e-4202-b00d-219c0a2684b1] Running

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.043883382s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-184831 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.47s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-184708 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-184708 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/no-preload/serial/Stop (4.13s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-184708 --alsologtostderr -v=3
E1031 18:50:01.463524   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/custom-flannel-183258/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-184708 --alsologtostderr -v=3: (4.130374133s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (4.13s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-184708 -n no-preload-184708

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-184708 -n no-preload-184708: exit status 7 (135.190422ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-184708 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-184831 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-184831 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.011436837s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-184831 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/no-preload/serial/SecondStart (329.32s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-184708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-184708 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.25.3: (5m28.969013807s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-184708 -n no-preload-184708
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (329.32s)

TestStartStop/group/embed-certs/serial/Stop (13.16s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-184831 --alsologtostderr -v=3
E1031 18:50:11.704157   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/custom-flannel-183258/client.crt: no such file or directory
E1031 18:50:15.268764   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/false-183258/client.crt: no such file or directory
E1031 18:50:15.274034   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/false-183258/client.crt: no such file or directory
E1031 18:50:15.284325   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/false-183258/client.crt: no such file or directory
E1031 18:50:15.304614   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/false-183258/client.crt: no such file or directory
E1031 18:50:15.345033   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/false-183258/client.crt: no such file or directory
E1031 18:50:15.425463   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/false-183258/client.crt: no such file or directory
E1031 18:50:15.586462   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/false-183258/client.crt: no such file or directory
E1031 18:50:15.906982   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/false-183258/client.crt: no such file or directory
E1031 18:50:16.548164   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/false-183258/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-184831 --alsologtostderr -v=3: (13.155817134s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.16s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-184831 -n embed-certs-184831
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-184831 -n embed-certs-184831: exit status 7 (98.247833ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-184831 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (338.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-184831 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.25.3
E1031 18:50:17.829147   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/false-183258/client.crt: no such file or directory
E1031 18:50:20.389609   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/false-183258/client.crt: no such file or directory
E1031 18:50:25.510580   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/false-183258/client.crt: no such file or directory
E1031 18:50:30.827916   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/cilium-183258/client.crt: no such file or directory
E1031 18:50:32.184423   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/custom-flannel-183258/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-184831 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.25.3: (5m37.712126252s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-184831 -n embed-certs-184831
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (338.26s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-184915 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [c7f939cc-ab85-4ae9-b7ac-df74b76e2073] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1031 18:50:35.751135   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/false-183258/client.crt: no such file or directory
helpers_test.go:342: "busybox" [c7f939cc-ab85-4ae9-b7ac-df74b76e2073] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.033954141s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-184915 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.49s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.74s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-184915 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-184915 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.74s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-184915 --alsologtostderr -v=3
E1031 18:50:46.693702   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/gvisor-183258/client.crt: no such file or directory
E1031 18:50:52.236812   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kindnet-183258/client.crt: no such file or directory
E1031 18:50:54.237429   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
E1031 18:50:56.232010   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/false-183258/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-184915 --alsologtostderr -v=3: (13.151437558s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.15s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-184915 -n default-k8s-diff-port-184915
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-184915 -n default-k8s-diff-port-184915: exit status 7 (103.018194ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-184915 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (330.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-184915 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.25.3
E1031 18:51:13.145670   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/custom-flannel-183258/client.crt: no such file or directory
E1031 18:51:22.653553   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/enable-default-cni-183258/client.crt: no such file or directory
E1031 18:51:22.658907   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/enable-default-cni-183258/client.crt: no such file or directory
E1031 18:51:22.669206   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/enable-default-cni-183258/client.crt: no such file or directory
E1031 18:51:22.690209   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/enable-default-cni-183258/client.crt: no such file or directory
E1031 18:51:22.730358   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/enable-default-cni-183258/client.crt: no such file or directory
E1031 18:51:22.811072   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/enable-default-cni-183258/client.crt: no such file or directory
E1031 18:51:22.972019   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/enable-default-cni-183258/client.crt: no such file or directory
E1031 18:51:23.293009   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/enable-default-cni-183258/client.crt: no such file or directory
E1031 18:51:23.933270   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/enable-default-cni-183258/client.crt: no such file or directory
E1031 18:51:25.213823   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/enable-default-cni-183258/client.crt: no such file or directory
E1031 18:51:27.774685   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/enable-default-cni-183258/client.crt: no such file or directory
E1031 18:51:32.895487   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/enable-default-cni-183258/client.crt: no such file or directory
E1031 18:51:36.567308   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/flannel-183258/client.crt: no such file or directory
E1031 18:51:36.572612   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/flannel-183258/client.crt: no such file or directory
E1031 18:51:36.582863   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/flannel-183258/client.crt: no such file or directory
E1031 18:51:36.603124   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/flannel-183258/client.crt: no such file or directory
E1031 18:51:36.643405   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/flannel-183258/client.crt: no such file or directory
E1031 18:51:36.723836   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/flannel-183258/client.crt: no such file or directory
E1031 18:51:36.884679   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/flannel-183258/client.crt: no such file or directory
E1031 18:51:37.192992   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/false-183258/client.crt: no such file or directory
E1031 18:51:37.205148   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/flannel-183258/client.crt: no such file or directory
E1031 18:51:37.845598   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/flannel-183258/client.crt: no such file or directory
E1031 18:51:39.126518   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/flannel-183258/client.crt: no such file or directory
E1031 18:51:41.687452   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/flannel-183258/client.crt: no such file or directory
E1031 18:51:43.135798   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/enable-default-cni-183258/client.crt: no such file or directory
E1031 18:51:46.808053   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/flannel-183258/client.crt: no such file or directory
E1031 18:51:52.748557   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/cilium-183258/client.crt: no such file or directory
E1031 18:51:53.694602   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/bridge-183258/client.crt: no such file or directory
E1031 18:51:53.699887   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/bridge-183258/client.crt: no such file or directory
E1031 18:51:53.710236   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/bridge-183258/client.crt: no such file or directory
E1031 18:51:53.730584   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/bridge-183258/client.crt: no such file or directory
E1031 18:51:53.770915   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/bridge-183258/client.crt: no such file or directory
E1031 18:51:53.851289   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/bridge-183258/client.crt: no such file or directory
E1031 18:51:54.011737   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/bridge-183258/client.crt: no such file or directory
E1031 18:51:54.332174   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/bridge-183258/client.crt: no such file or directory
E1031 18:51:54.972765   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/bridge-183258/client.crt: no such file or directory
E1031 18:51:56.252998   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/bridge-183258/client.crt: no such file or directory
E1031 18:51:57.048725   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/flannel-183258/client.crt: no such file or directory
E1031 18:51:58.813728   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/bridge-183258/client.crt: no such file or directory
E1031 18:52:03.617016   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/enable-default-cni-183258/client.crt: no such file or directory
E1031 18:52:03.934650   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/bridge-183258/client.crt: no such file or directory
E1031 18:52:09.766706   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/auto-183258/client.crt: no such file or directory
E1031 18:52:14.175260   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/bridge-183258/client.crt: no such file or directory
E1031 18:52:17.529190   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/flannel-183258/client.crt: no such file or directory
E1031 18:52:34.655963   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/bridge-183258/client.crt: no such file or directory
E1031 18:52:35.066324   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/custom-flannel-183258/client.crt: no such file or directory
E1031 18:52:37.450684   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/auto-183258/client.crt: no such file or directory
E1031 18:52:44.577199   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/enable-default-cni-183258/client.crt: no such file or directory
E1031 18:52:47.205650   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/skaffold-183126/client.crt: no such file or directory
E1031 18:52:52.853523   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 18:52:57.874853   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kubenet-183258/client.crt: no such file or directory
E1031 18:52:57.880141   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kubenet-183258/client.crt: no such file or directory
E1031 18:52:57.890435   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kubenet-183258/client.crt: no such file or directory
E1031 18:52:57.910745   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kubenet-183258/client.crt: no such file or directory
E1031 18:52:57.951091   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kubenet-183258/client.crt: no such file or directory
E1031 18:52:58.031502   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kubenet-183258/client.crt: no such file or directory
E1031 18:52:58.191960   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kubenet-183258/client.crt: no such file or directory
E1031 18:52:58.490258   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/flannel-183258/client.crt: no such file or directory
E1031 18:52:58.512756   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kubenet-183258/client.crt: no such file or directory
E1031 18:52:59.113459   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/false-183258/client.crt: no such file or directory
E1031 18:52:59.153689   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kubenet-183258/client.crt: no such file or directory
E1031 18:53:00.434877   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kubenet-183258/client.crt: no such file or directory
E1031 18:53:02.995391   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kubenet-183258/client.crt: no such file or directory
E1031 18:53:08.116518   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kubenet-183258/client.crt: no such file or directory
E1031 18:53:08.347890   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/calico-183258/client.crt: no such file or directory
E1031 18:53:08.353215   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/calico-183258/client.crt: no such file or directory
E1031 18:53:08.363501   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/calico-183258/client.crt: no such file or directory
E1031 18:53:08.383836   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/calico-183258/client.crt: no such file or directory
E1031 18:53:08.392105   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kindnet-183258/client.crt: no such file or directory
E1031 18:53:08.424388   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/calico-183258/client.crt: no such file or directory
E1031 18:53:08.504760   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/calico-183258/client.crt: no such file or directory
E1031 18:53:08.665236   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/calico-183258/client.crt: no such file or directory
E1031 18:53:08.986283   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/calico-183258/client.crt: no such file or directory
E1031 18:53:09.627272   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/calico-183258/client.crt: no such file or directory
E1031 18:53:10.907435   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/calico-183258/client.crt: no such file or directory
E1031 18:53:13.468010   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/calico-183258/client.crt: no such file or directory
E1031 18:53:15.616963   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/bridge-183258/client.crt: no such file or directory
E1031 18:53:18.356972   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kubenet-183258/client.crt: no such file or directory
E1031 18:53:18.588697   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/calico-183258/client.crt: no such file or directory
E1031 18:53:21.940461   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/functional-174543/client.crt: no such file or directory
E1031 18:53:28.829798   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/calico-183258/client.crt: no such file or directory
E1031 18:53:36.077817   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kindnet-183258/client.crt: no such file or directory
E1031 18:53:38.837731   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kubenet-183258/client.crt: no such file or directory
E1031 18:53:49.310973   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/calico-183258/client.crt: no such file or directory
E1031 18:54:06.497754   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/enable-default-cni-183258/client.crt: no such file or directory
E1031 18:54:08.904826   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/cilium-183258/client.crt: no such file or directory
E1031 18:54:10.248958   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/skaffold-183126/client.crt: no such file or directory
E1031 18:54:15.902148   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 18:54:19.798814   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kubenet-183258/client.crt: no such file or directory
E1031 18:54:20.410635   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/flannel-183258/client.crt: no such file or directory
E1031 18:54:30.271502   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/calico-183258/client.crt: no such file or directory
E1031 18:54:36.589056   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/cilium-183258/client.crt: no such file or directory
E1031 18:54:37.537186   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/bridge-183258/client.crt: no such file or directory
E1031 18:54:51.224340   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/custom-flannel-183258/client.crt: no such file or directory
E1031 18:55:15.268622   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/false-183258/client.crt: no such file or directory
E1031 18:55:18.907026   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/custom-flannel-183258/client.crt: no such file or directory
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-184915 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.25.3: (5m29.758972077s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-184915 -n default-k8s-diff-port-184915
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (330.10s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (20.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-68qqx" [ac2c2b85-e8ea-4b64-9135-7ea63675196a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1031 18:55:41.719677   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kubenet-183258/client.crt: no such file or directory
E1031 18:55:42.954082   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/false-183258/client.crt: no such file or directory
E1031 18:55:46.693087   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/gvisor-183258/client.crt: no such file or directory
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-68qqx" [ac2c2b85-e8ea-4b64-9135-7ea63675196a] Running
E1031 18:55:52.191719   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/calico-183258/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 20.020351105s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (20.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-68qqx" [ac2c2b85-e8ea-4b64-9135-7ea63675196a] Running
E1031 18:55:54.237694   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/ingress-addon-legacy-174921/client.crt: no such file or directory
=== CONT  TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011581132s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-184708 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (18.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-rkm8n" [1a860b6b-bba1-48b2-a10a-f78d53589aa8] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-rkm8n" [1a860b6b-bba1-48b2-a10a-f78d53589aa8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.019772791s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (18.02s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-184708 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/no-preload/serial/Pause (3.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-184708 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-184708 -n no-preload-184708
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-184708 -n no-preload-184708: exit status 2 (333.051968ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-184708 -n no-preload-184708
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-184708 -n no-preload-184708: exit status 2 (297.694667ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-184708 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-184708 -n no-preload-184708
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-184708 -n no-preload-184708
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.10s)
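The Pause subtests in this report all follow the same sequence: pause the profile, confirm the API server reports `Paused` and the kubelet `Stopped` (with `status` exiting 2, which the test treats as acceptable), then unpause and re-check. A minimal sketch of that flow, in which a stub function stands in for `out/minikube-linux-amd64` so it runs without a KVM cluster — the stub and its state handling are assumptions for illustration; only the flags and the Paused/Stopped states come from the log above:

```shell
#!/bin/sh
# Sketch of the pause/unpause verification done at start_stop_delete_test.go:311.
# "minikube" here is a hypothetical stub, NOT the real binary: it tracks a
# paused flag and reproduces the states seen in the log above.
minikube() {
  case "$1" in
    pause)   paused=1 ;;
    unpause) paused=0 ;;
    status)
      # While paused, `status` exits 2 -- the test logs
      # "status error: exit status 2 (may be ok)" and continues.
      if [ "${paused:-0}" = 1 ]; then
        case "$*" in
          *APIServer*) echo "Paused" ;;
          *Kubelet*)   echo "Stopped" ;;
        esac
        return 2
      fi
      echo "Running" ;;
  esac
}

minikube pause -p no-preload-184708 --alsologtostderr -v=1
api=$(minikube status --format='{{.APIServer}}' -p no-preload-184708)   || true
kubelet=$(minikube status --format='{{.Kubelet}}' -p no-preload-184708) || true
minikube unpause -p no-preload-184708 --alsologtostderr -v=1
after=$(minikube status --format='{{.APIServer}}' -p no-preload-184708)
echo "while paused: apiserver=$api kubelet=$kubelet; after unpause: apiserver=$after"
```

Run with a real cluster, the same four commands against the actual binary reproduce the Paused/Stopped/Running transitions the PASS lines above assert.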

TestStartStop/group/newest-cni/serial/FirstStart (75.26s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-185602 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --kubernetes-version=v1.25.3
=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-185602 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --kubernetes-version=v1.25.3: (1m15.260697917s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (75.26s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-rkm8n" [1a860b6b-bba1-48b2-a10a-f78d53589aa8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008079079s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-184831 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-184831 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (2.79s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-184831 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-184831 -n embed-certs-184831
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-184831 -n embed-certs-184831: exit status 2 (265.199817ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-184831 -n embed-certs-184831
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-184831 -n embed-certs-184831: exit status 2 (266.977883ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-184831 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-184831 -n embed-certs-184831
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-184831 -n embed-certs-184831
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.79s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-qzw9g" [d918b8e7-8996-4c03-8c77-697cb88ece48] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1031 18:56:36.568258   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/flannel-183258/client.crt: no such file or directory
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-qzw9g" [d918b8e7-8996-4c03-8c77-697cb88ece48] Running
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.015566282s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (13.02s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-59d54d6bc8-jnr56" [94d563ca-95a8-4f73-b18c-6e6c8b841847] Running
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014392979s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-qzw9g" [d918b8e7-8996-4c03-8c77-697cb88ece48] Running
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009023321s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-184915 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-59d54d6bc8-jnr56" [94d563ca-95a8-4f73-b18c-6e6c8b841847] Running
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007021181s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-184658 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-184915 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.86s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-184915 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-184915 -n default-k8s-diff-port-184915
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-184915 -n default-k8s-diff-port-184915: exit status 2 (270.625526ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-184915 -n default-k8s-diff-port-184915
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-184915 -n default-k8s-diff-port-184915: exit status 2 (251.306174ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-184915 --alsologtostderr -v=1
=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-184915 -n default-k8s-diff-port-184915
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-184915 -n default-k8s-diff-port-184915
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.86s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-184658 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (3.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-184658 --alsologtostderr -v=1
E1031 18:56:50.338650   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/enable-default-cni-183258/client.crt: no such file or directory
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-184658 -n old-k8s-version-184658
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-184658 -n old-k8s-version-184658: exit status 2 (313.952431ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-184658 -n old-k8s-version-184658
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-184658 -n old-k8s-version-184658: exit status 2 (296.392223ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-184658 --alsologtostderr -v=1
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-184658 --alsologtostderr -v=1: (1.226578564s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-184658 -n old-k8s-version-184658
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-184658 -n old-k8s-version-184658
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.37s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-185602 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/newest-cni/serial/Stop (13.14s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-185602 --alsologtostderr -v=3
E1031 18:57:21.377858   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/bridge-183258/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-185602 --alsologtostderr -v=3: (13.140678611s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.14s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-185602 -n newest-cni-185602
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-185602 -n newest-cni-185602: exit status 7 (86.932649ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-185602 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (38.72s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-185602 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --kubernetes-version=v1.25.3
E1031 18:57:47.205576   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/skaffold-183126/client.crt: no such file or directory
E1031 18:57:52.853836   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/addons-174026/client.crt: no such file or directory
E1031 18:57:57.874684   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kubenet-183258/client.crt: no such file or directory
E1031 18:58:08.347646   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/calico-183258/client.crt: no such file or directory
E1031 18:58:08.391972   49529 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15242-42743/.minikube/profiles/kindnet-183258/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-185602 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --kubernetes-version=v1.25.3: (38.45703536s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-185602 -n newest-cni-185602
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.72s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-185602 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (2.34s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-185602 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-185602 -n newest-cni-185602
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-185602 -n newest-cni-185602: exit status 2 (252.64431ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-185602 -n newest-cni-185602
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-185602 -n newest-cni-185602: exit status 2 (261.023347ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-185602 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-185602 -n newest-cni-185602
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-185602 -n newest-cni-185602
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.34s)


Test skip (26/306)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.25.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.25.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.25.3/cached-images (0.00s)

TestDownloadOnly/v1.25.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.25.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.25.3/binaries (0.00s)

TestDownloadOnly/v1.25.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.25.3/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.25.3/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:214: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:451: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
=== CONT  TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:291: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-184915" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-184915
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)