Test Report: KVM_Linux_containerd 14079

                    
798c4e8fed290cfa318a9fb994a7c6f555db39c1 : 2022-06-01 : 24216

Test failures (2/287)

Order  Failed test                                    Duration (s)
209    TestStoppedBinaryUpgrade/Upgrade               271.46
222    TestPause/serial/SecondStartNoReconfiguration  43.13
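The first failure exercises minikube's stopped-binary upgrade path. The test drives three minikube invocations, shown verbatim in the log below: start a cluster with the old v1.16.0 release binary, stop it, then start it again with the freshly built binary. A minimal sketch for replaying the sequence by hand (the /tmp path is the CI harness's temp copy of the v1.16.0 release and the profile name is the test's generated one, so both will differ outside this job):

	/tmp/minikube-v1.16.0.4033024435.exe start -p stopped-upgrade-20220601110426-7337 --memory=2200 --vm-driver=kvm2 --container-runtime=containerd
	/tmp/minikube-v1.16.0.4033024435.exe -p stopped-upgrade-20220601110426-7337 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-20220601110426-7337 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd

In this run, the second start is the step that fails, exiting with status 90.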
TestStoppedBinaryUpgrade/Upgrade (271.46s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.16.0.4033024435.exe start -p stopped-upgrade-20220601110426-7337 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0601 11:04:31.580762    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.16.0.4033024435.exe start -p stopped-upgrade-20220601110426-7337 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m52.512936208s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.16.0.4033024435.exe -p stopped-upgrade-20220601110426-7337 stop
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.16.0.4033024435.exe -p stopped-upgrade-20220601110426-7337 stop: (2.349779645s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20220601110426-7337 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0601 11:07:34.623730    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
E0601 11:07:46.265556    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-20220601110426-7337 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: exit status 90 (1m36.586653497s)

-- stdout --
	* [stopped-upgrade-20220601110426-7337] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-20220601110426-7337 in cluster stopped-upgrade-20220601110426-7337
	* Restarting existing kvm2 VM for "stopped-upgrade-20220601110426-7337" ...
	
	

-- /stdout --
** stderr ** 
	I0601 11:07:22.104063   23290 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:07:22.104226   23290 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:07:22.104233   23290 out.go:309] Setting ErrFile to fd 2...
	I0601 11:07:22.104241   23290 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:07:22.104454   23290 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:07:22.104865   23290 out.go:303] Setting JSON to false
	I0601 11:07:22.106219   23290 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2996,"bootTime":1654078646,"procs":256,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:07:22.106349   23290 start.go:125] virtualization: kvm guest
	I0601 11:07:22.109019   23290 out.go:177] * [stopped-upgrade-20220601110426-7337] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:07:22.110828   23290 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:07:22.110763   23290 notify.go:193] Checking for updates...
	I0601 11:07:22.112364   23290 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:07:22.113889   23290 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:07:22.115394   23290 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:07:22.116847   23290 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:07:22.118796   23290 config.go:178] Loaded profile config "stopped-upgrade-20220601110426-7337": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0601 11:07:22.119372   23290 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0601 11:07:22.119426   23290 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0601 11:07:22.139650   23290 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:38885
	I0601 11:07:22.140391   23290 main.go:134] libmachine: () Calling .GetVersion
	I0601 11:07:22.141066   23290 main.go:134] libmachine: Using API Version  1
	I0601 11:07:22.141093   23290 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 11:07:22.141507   23290 main.go:134] libmachine: () Calling .GetMachineName
	I0601 11:07:22.141685   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .DriverName
	I0601 11:07:22.144294   23290 out.go:177] * Kubernetes 1.23.6 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.6
	I0601 11:07:22.145852   23290 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:07:22.146177   23290 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0601 11:07:22.146220   23290 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0601 11:07:22.165735   23290 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:44455
	I0601 11:07:22.166203   23290 main.go:134] libmachine: () Calling .GetVersion
	I0601 11:07:22.166709   23290 main.go:134] libmachine: Using API Version  1
	I0601 11:07:22.166725   23290 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 11:07:22.167021   23290 main.go:134] libmachine: () Calling .GetMachineName
	I0601 11:07:22.167135   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .DriverName
	I0601 11:07:22.222737   23290 out.go:177] * Using the kvm2 driver based on existing profile
	I0601 11:07:22.224716   23290 start.go:284] selected driver: kvm2
	I0601 11:07:22.224732   23290 start.go:806] validating driver "kvm2" against &{Name:stopped-upgrade-20220601110426-7337 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.16.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:stopped-upgrade-20220601110426-7337 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.209 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:07:22.224860   23290 start.go:817] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:07:22.225568   23290 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:07:22.225733   23290 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0601 11:07:22.242149   23290 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.26.0-beta.1
	I0601 11:07:22.242568   23290 cni.go:95] Creating CNI manager for ""
	I0601 11:07:22.242590   23290 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
	I0601 11:07:22.242600   23290 start_flags.go:306] config:
	{Name:stopped-upgrade-20220601110426-7337 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.16.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:stopped-upgrade-20220601110426-7337 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.209 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:07:22.242757   23290 iso.go:128] acquiring lock: {Name:mkad95a9aa9919c9e63cafd3e91a2bd2bcafb74e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:07:22.267026   23290 out.go:177] * Starting control plane node stopped-upgrade-20220601110426-7337 in cluster stopped-upgrade-20220601110426-7337
	I0601 11:07:22.271431   23290 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0601 11:07:22.271548   23290 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0601 11:07:22.271576   23290 cache.go:57] Caching tarball of preloaded images
	I0601 11:07:22.271731   23290 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:07:22.271759   23290 cache.go:60] Finished verifying existence of preloaded tar for  v1.20.0 on containerd
	I0601 11:07:22.271931   23290 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/stopped-upgrade-20220601110426-7337/config.json ...
	I0601 11:07:22.272141   23290 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:07:22.272173   23290 start.go:352] acquiring machines lock for stopped-upgrade-20220601110426-7337: {Name:mk996831bcc8315bce9654ddce127329929e96ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0601 11:07:33.332086   23290 start.go:356] acquired machines lock for "stopped-upgrade-20220601110426-7337" in 11.059887219s
	I0601 11:07:33.332124   23290 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:07:33.332136   23290 fix.go:55] fixHost starting: 
	I0601 11:07:33.332529   23290 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0601 11:07:33.332582   23290 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0601 11:07:33.349045   23290 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:34709
	I0601 11:07:33.349414   23290 main.go:134] libmachine: () Calling .GetVersion
	I0601 11:07:33.349882   23290 main.go:134] libmachine: Using API Version  1
	I0601 11:07:33.349912   23290 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 11:07:33.350281   23290 main.go:134] libmachine: () Calling .GetMachineName
	I0601 11:07:33.350461   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .DriverName
	I0601 11:07:33.350635   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetState
	I0601 11:07:33.352053   23290 fix.go:103] recreateIfNeeded on stopped-upgrade-20220601110426-7337: state=Stopped err=<nil>
	I0601 11:07:33.352082   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .DriverName
	W0601 11:07:33.352239   23290 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 11:07:33.354528   23290 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-20220601110426-7337" ...
	I0601 11:07:33.355857   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .Start
	I0601 11:07:33.355995   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Ensuring networks are active...
	I0601 11:07:33.356676   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Ensuring network default is active
	I0601 11:07:33.357036   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Ensuring network minikube-net is active
	I0601 11:07:33.357523   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Getting domain xml...
	I0601 11:07:33.358190   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Creating domain...
	I0601 11:07:34.645585   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Waiting to get IP...
	I0601 11:07:34.646642   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:34.647154   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | unable to find current IP address of domain stopped-upgrade-20220601110426-7337 in network minikube-net
	I0601 11:07:34.647237   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | I0601 11:07:34.647131   23375 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0601 11:07:34.911809   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:34.912302   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | unable to find current IP address of domain stopped-upgrade-20220601110426-7337 in network minikube-net
	I0601 11:07:34.912330   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | I0601 11:07:34.912260   23375 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0601 11:07:35.294808   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:35.295262   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | unable to find current IP address of domain stopped-upgrade-20220601110426-7337 in network minikube-net
	I0601 11:07:35.295291   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | I0601 11:07:35.295202   23375 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0601 11:07:35.719837   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:35.720347   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | unable to find current IP address of domain stopped-upgrade-20220601110426-7337 in network minikube-net
	I0601 11:07:35.720394   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | I0601 11:07:35.720297   23375 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0601 11:07:36.194848   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:36.195318   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | unable to find current IP address of domain stopped-upgrade-20220601110426-7337 in network minikube-net
	I0601 11:07:36.195350   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | I0601 11:07:36.195264   23375 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0601 11:07:36.783958   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:36.784524   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | unable to find current IP address of domain stopped-upgrade-20220601110426-7337 in network minikube-net
	I0601 11:07:36.784549   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | I0601 11:07:36.784475   23375 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0601 11:07:37.620454   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:37.621113   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | unable to find current IP address of domain stopped-upgrade-20220601110426-7337 in network minikube-net
	I0601 11:07:37.621142   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | I0601 11:07:37.621036   23375 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0601 11:07:38.369726   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:38.370448   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | unable to find current IP address of domain stopped-upgrade-20220601110426-7337 in network minikube-net
	I0601 11:07:38.370484   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | I0601 11:07:38.370379   23375 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0601 11:07:39.359584   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:39.360131   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | unable to find current IP address of domain stopped-upgrade-20220601110426-7337 in network minikube-net
	I0601 11:07:39.360166   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | I0601 11:07:39.360078   23375 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0601 11:07:40.551297   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:40.551901   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | unable to find current IP address of domain stopped-upgrade-20220601110426-7337 in network minikube-net
	I0601 11:07:40.551940   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | I0601 11:07:40.551807   23375 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0601 11:07:42.231798   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:42.232219   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | unable to find current IP address of domain stopped-upgrade-20220601110426-7337 in network minikube-net
	I0601 11:07:42.232243   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | I0601 11:07:42.232144   23375 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0601 11:07:44.580128   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:44.580687   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | unable to find current IP address of domain stopped-upgrade-20220601110426-7337 in network minikube-net
	I0601 11:07:44.580714   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | I0601 11:07:44.580641   23375 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0601 11:07:47.948885   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:47.949442   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | unable to find current IP address of domain stopped-upgrade-20220601110426-7337 in network minikube-net
	I0601 11:07:47.949469   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | I0601 11:07:47.949400   23375 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
	I0601 11:07:51.069373   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:51.069909   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Found IP for machine: 192.168.105.209
	I0601 11:07:51.069940   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Reserving static IP address...
	I0601 11:07:51.069959   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has current primary IP address 192.168.105.209 and MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:51.070428   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | found host DHCP lease matching {name: "stopped-upgrade-20220601110426-7337", mac: "52:54:00:f5:49:4e", ip: "192.168.105.209"} in network minikube-net: {Iface:virbr7 ExpiryTime:2022-06-01 12:07:44 +0000 UTC Type:0 Mac:52:54:00:f5:49:4e Iaid: IPaddr:192.168.105.209 Prefix:24 Hostname:stopped-upgrade-20220601110426-7337 Clientid:01:52:54:00:f5:49:4e}
	I0601 11:07:51.070462   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Reserved static IP address: 192.168.105.209
	I0601 11:07:51.070490   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-20220601110426-7337", mac: "52:54:00:f5:49:4e", ip: "192.168.105.209"}
	I0601 11:07:51.070524   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | Getting to WaitForSSH function...
	I0601 11:07:51.070544   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Waiting for SSH to be available...
	I0601 11:07:51.072600   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:51.072981   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:49:4e", ip: ""} in network minikube-net: {Iface:virbr7 ExpiryTime:2022-06-01 12:07:44 +0000 UTC Type:0 Mac:52:54:00:f5:49:4e Iaid: IPaddr:192.168.105.209 Prefix:24 Hostname:stopped-upgrade-20220601110426-7337 Clientid:01:52:54:00:f5:49:4e}
	I0601 11:07:51.073007   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined IP address 192.168.105.209 and MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:51.073110   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | Using SSH client type: external
	I0601 11:07:51.073142   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/stopped-upgrade-20220601110426-7337/id_rsa (-rw-------)
	I0601 11:07:51.073173   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.105.209 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/stopped-upgrade-20220601110426-7337/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0601 11:07:51.073202   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | About to run SSH command:
	I0601 11:07:51.073232   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | exit 0
	I0601 11:07:51.207195   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | SSH cmd err, output: <nil>: 
	I0601 11:07:51.207562   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetConfigRaw
	I0601 11:07:51.208288   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetIP
	I0601 11:07:51.211253   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:51.211602   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:49:4e", ip: ""} in network minikube-net: {Iface:virbr7 ExpiryTime:2022-06-01 12:07:44 +0000 UTC Type:0 Mac:52:54:00:f5:49:4e Iaid: IPaddr:192.168.105.209 Prefix:24 Hostname:stopped-upgrade-20220601110426-7337 Clientid:01:52:54:00:f5:49:4e}
	I0601 11:07:51.211636   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined IP address 192.168.105.209 and MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:51.211866   23290 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/stopped-upgrade-20220601110426-7337/config.json ...
	I0601 11:07:51.212051   23290 machine.go:88] provisioning docker machine ...
	I0601 11:07:51.212073   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .DriverName
	I0601 11:07:51.212286   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetMachineName
	I0601 11:07:51.212481   23290 buildroot.go:166] provisioning hostname "stopped-upgrade-20220601110426-7337"
	I0601 11:07:51.212502   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetMachineName
	I0601 11:07:51.212652   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHHostname
	I0601 11:07:51.214974   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:51.215343   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:49:4e", ip: ""} in network minikube-net: {Iface:virbr7 ExpiryTime:2022-06-01 12:07:44 +0000 UTC Type:0 Mac:52:54:00:f5:49:4e Iaid: IPaddr:192.168.105.209 Prefix:24 Hostname:stopped-upgrade-20220601110426-7337 Clientid:01:52:54:00:f5:49:4e}
	I0601 11:07:51.215388   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined IP address 192.168.105.209 and MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:51.215534   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHPort
	I0601 11:07:51.215721   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHKeyPath
	I0601 11:07:51.215876   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHKeyPath
	I0601 11:07:51.216014   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHUsername
	I0601 11:07:51.216183   23290 main.go:134] libmachine: Using SSH client type: native
	I0601 11:07:51.216360   23290 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 192.168.105.209 22 <nil> <nil>}
	I0601 11:07:51.216382   23290 main.go:134] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-20220601110426-7337 && echo "stopped-upgrade-20220601110426-7337" | sudo tee /etc/hostname
	I0601 11:07:51.349780   23290 main.go:134] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-20220601110426-7337
	
	I0601 11:07:51.349806   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHHostname
	I0601 11:07:51.352628   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:51.353042   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:49:4e", ip: ""} in network minikube-net: {Iface:virbr7 ExpiryTime:2022-06-01 12:07:44 +0000 UTC Type:0 Mac:52:54:00:f5:49:4e Iaid: IPaddr:192.168.105.209 Prefix:24 Hostname:stopped-upgrade-20220601110426-7337 Clientid:01:52:54:00:f5:49:4e}
	I0601 11:07:51.353076   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined IP address 192.168.105.209 and MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:51.353247   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHPort
	I0601 11:07:51.353461   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHKeyPath
	I0601 11:07:51.353627   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHKeyPath
	I0601 11:07:51.353769   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHUsername
	I0601 11:07:51.353970   23290 main.go:134] libmachine: Using SSH client type: native
	I0601 11:07:51.354137   23290 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 192.168.105.209 22 <nil> <nil>}
	I0601 11:07:51.354168   23290 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-20220601110426-7337' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-20220601110426-7337/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-20220601110426-7337' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:07:51.483622   23290 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:07:51.483650   23290 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:07:51.483686   23290 buildroot.go:174] setting up certificates
	I0601 11:07:51.483697   23290 provision.go:83] configureAuth start
	I0601 11:07:51.483711   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetMachineName
	I0601 11:07:51.484018   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetIP
	I0601 11:07:51.486780   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:51.487204   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:49:4e", ip: ""} in network minikube-net: {Iface:virbr7 ExpiryTime:2022-06-01 12:07:44 +0000 UTC Type:0 Mac:52:54:00:f5:49:4e Iaid: IPaddr:192.168.105.209 Prefix:24 Hostname:stopped-upgrade-20220601110426-7337 Clientid:01:52:54:00:f5:49:4e}
	I0601 11:07:51.487240   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined IP address 192.168.105.209 and MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:51.487376   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHHostname
	I0601 11:07:51.489624   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:51.489988   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:49:4e", ip: ""} in network minikube-net: {Iface:virbr7 ExpiryTime:2022-06-01 12:07:44 +0000 UTC Type:0 Mac:52:54:00:f5:49:4e Iaid: IPaddr:192.168.105.209 Prefix:24 Hostname:stopped-upgrade-20220601110426-7337 Clientid:01:52:54:00:f5:49:4e}
	I0601 11:07:51.490022   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined IP address 192.168.105.209 and MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:51.490115   23290 provision.go:138] copyHostCerts
	I0601 11:07:51.490168   23290 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:07:51.490183   23290 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:07:51.490236   23290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:07:51.490315   23290 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:07:51.490324   23290 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:07:51.490344   23290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:07:51.490388   23290 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:07:51.490398   23290 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:07:51.490416   23290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1675 bytes)
	I0601 11:07:51.490456   23290 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-20220601110426-7337 san=[192.168.105.209 192.168.105.209 localhost 127.0.0.1 minikube stopped-upgrade-20220601110426-7337]
	I0601 11:07:51.755056   23290 provision.go:172] copyRemoteCerts
	I0601 11:07:51.755151   23290 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:07:51.755186   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHHostname
	I0601 11:07:51.758099   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:51.758481   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:49:4e", ip: ""} in network minikube-net: {Iface:virbr7 ExpiryTime:2022-06-01 12:07:44 +0000 UTC Type:0 Mac:52:54:00:f5:49:4e Iaid: IPaddr:192.168.105.209 Prefix:24 Hostname:stopped-upgrade-20220601110426-7337 Clientid:01:52:54:00:f5:49:4e}
	I0601 11:07:51.758513   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined IP address 192.168.105.209 and MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:51.758708   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHPort
	I0601 11:07:51.758921   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHKeyPath
	I0601 11:07:51.759116   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHUsername
	I0601 11:07:51.759268   23290 sshutil.go:53] new ssh client: &{IP:192.168.105.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/stopped-upgrade-20220601110426-7337/id_rsa Username:docker}
	I0601 11:07:51.853694   23290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:07:51.867225   23290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0601 11:07:51.882315   23290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0601 11:07:51.896814   23290 provision.go:86] duration metric: configureAuth took 413.105333ms
	I0601 11:07:51.896836   23290 buildroot.go:189] setting minikube options for container-runtime
	I0601 11:07:51.897018   23290 config.go:178] Loaded profile config "stopped-upgrade-20220601110426-7337": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0601 11:07:51.897032   23290 machine.go:91] provisioned docker machine in 684.968468ms
	I0601 11:07:51.897039   23290 start.go:306] post-start starting for "stopped-upgrade-20220601110426-7337" (driver="kvm2")
	I0601 11:07:51.897045   23290 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:07:51.897102   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .DriverName
	I0601 11:07:51.897372   23290 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:07:51.897398   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHHostname
	I0601 11:07:51.900302   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:51.900669   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:49:4e", ip: ""} in network minikube-net: {Iface:virbr7 ExpiryTime:2022-06-01 12:07:44 +0000 UTC Type:0 Mac:52:54:00:f5:49:4e Iaid: IPaddr:192.168.105.209 Prefix:24 Hostname:stopped-upgrade-20220601110426-7337 Clientid:01:52:54:00:f5:49:4e}
	I0601 11:07:51.900710   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined IP address 192.168.105.209 and MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:51.900870   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHPort
	I0601 11:07:51.901029   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHKeyPath
	I0601 11:07:51.901173   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHUsername
	I0601 11:07:51.901346   23290 sshutil.go:53] new ssh client: &{IP:192.168.105.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/stopped-upgrade-20220601110426-7337/id_rsa Username:docker}
	I0601 11:07:51.991095   23290 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:07:51.995560   23290 info.go:137] Remote host: Buildroot 2020.02.8
	I0601 11:07:51.995588   23290 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:07:51.995654   23290 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:07:51.995753   23290 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/73372.pem -> 73372.pem in /etc/ssl/certs
	I0601 11:07:51.995869   23290 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:07:52.002168   23290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/73372.pem --> /etc/ssl/certs/73372.pem (1708 bytes)
	I0601 11:07:52.017886   23290 start.go:309] post-start completed in 120.837726ms
	I0601 11:07:52.017908   23290 fix.go:57] fixHost completed within 18.685773324s
	I0601 11:07:52.017927   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHHostname
	I0601 11:07:52.021281   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:52.021752   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:49:4e", ip: ""} in network minikube-net: {Iface:virbr7 ExpiryTime:2022-06-01 12:07:44 +0000 UTC Type:0 Mac:52:54:00:f5:49:4e Iaid: IPaddr:192.168.105.209 Prefix:24 Hostname:stopped-upgrade-20220601110426-7337 Clientid:01:52:54:00:f5:49:4e}
	I0601 11:07:52.021782   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined IP address 192.168.105.209 and MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:52.022033   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHPort
	I0601 11:07:52.022226   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHKeyPath
	I0601 11:07:52.022389   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHKeyPath
	I0601 11:07:52.022514   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHUsername
	I0601 11:07:52.022722   23290 main.go:134] libmachine: Using SSH client type: native
	I0601 11:07:52.022862   23290 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 192.168.105.209 22 <nil> <nil>}
	I0601 11:07:52.022875   23290 main.go:134] libmachine: About to run SSH command:
	date +%s.%N
	I0601 11:07:52.160167   23290 main.go:134] libmachine: SSH cmd err, output: <nil>: 1654081672.128590339
	
	I0601 11:07:52.160195   23290 fix.go:207] guest clock: 1654081672.128590339
	I0601 11:07:52.160205   23290 fix.go:220] Guest: 2022-06-01 11:07:52.128590339 +0000 UTC Remote: 2022-06-01 11:07:52.0179116 +0000 UTC m=+29.984302993 (delta=110.678739ms)
	I0601 11:07:52.160234   23290 fix.go:191] guest clock delta is within tolerance: 110.678739ms
	I0601 11:07:52.160240   23290 start.go:81] releasing machines lock for "stopped-upgrade-20220601110426-7337", held for 18.828132202s
	I0601 11:07:52.160285   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .DriverName
	I0601 11:07:52.160598   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetIP
	I0601 11:07:52.163841   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:52.164371   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:49:4e", ip: ""} in network minikube-net: {Iface:virbr7 ExpiryTime:2022-06-01 12:07:44 +0000 UTC Type:0 Mac:52:54:00:f5:49:4e Iaid: IPaddr:192.168.105.209 Prefix:24 Hostname:stopped-upgrade-20220601110426-7337 Clientid:01:52:54:00:f5:49:4e}
	I0601 11:07:52.164410   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined IP address 192.168.105.209 and MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:52.164616   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .DriverName
	I0601 11:07:52.164772   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .DriverName
	I0601 11:07:52.164960   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .DriverName
	I0601 11:07:52.165550   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .DriverName
	I0601 11:07:52.165751   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .DriverName
	I0601 11:07:52.165825   23290 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:07:52.165870   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHHostname
	I0601 11:07:52.165958   23290 ssh_runner.go:195] Run: systemctl --version
	I0601 11:07:52.165975   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHHostname
	I0601 11:07:52.169024   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:52.169480   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:49:4e", ip: ""} in network minikube-net: {Iface:virbr7 ExpiryTime:2022-06-01 12:07:44 +0000 UTC Type:0 Mac:52:54:00:f5:49:4e Iaid: IPaddr:192.168.105.209 Prefix:24 Hostname:stopped-upgrade-20220601110426-7337 Clientid:01:52:54:00:f5:49:4e}
	I0601 11:07:52.169548   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined IP address 192.168.105.209 and MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:52.169672   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:52.169954   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHPort
	I0601 11:07:52.170129   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHKeyPath
	I0601 11:07:52.170304   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:49:4e", ip: ""} in network minikube-net: {Iface:virbr7 ExpiryTime:2022-06-01 12:07:44 +0000 UTC Type:0 Mac:52:54:00:f5:49:4e Iaid: IPaddr:192.168.105.209 Prefix:24 Hostname:stopped-upgrade-20220601110426-7337 Clientid:01:52:54:00:f5:49:4e}
	I0601 11:07:52.170330   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) DBG | domain stopped-upgrade-20220601110426-7337 has defined IP address 192.168.105.209 and MAC address 52:54:00:f5:49:4e in network minikube-net
	I0601 11:07:52.170385   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHPort
	I0601 11:07:52.170437   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHUsername
	I0601 11:07:52.170517   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHKeyPath
	I0601 11:07:52.170625   23290 sshutil.go:53] new ssh client: &{IP:192.168.105.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/stopped-upgrade-20220601110426-7337/id_rsa Username:docker}
	I0601 11:07:52.170654   23290 main.go:134] libmachine: (stopped-upgrade-20220601110426-7337) Calling .GetSSHUsername
	I0601 11:07:52.170778   23290 sshutil.go:53] new ssh client: &{IP:192.168.105.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/stopped-upgrade-20220601110426-7337/id_rsa Username:docker}
	I0601 11:07:52.276852   23290 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0601 11:07:52.276972   23290 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:07:56.291949   23290 ssh_runner.go:235] Completed: sudo crictl images --output json: (4.014952493s)
	I0601 11:07:56.292106   23290 containerd.go:543] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0601 11:07:56.292167   23290 ssh_runner.go:195] Run: which lz4
	I0601 11:07:56.296713   23290 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0601 11:07:56.301737   23290 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0601 11:07:56.301767   23290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (472503869 bytes)
	I0601 11:07:57.736032   23290 containerd.go:490] Took 1.439340 seconds to copy over tarball
	I0601 11:07:57.736104   23290 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0601 11:08:01.462934   23290 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.726802072s)
	I0601 11:08:01.462963   23290 containerd.go:497] Took 3.726906 seconds to extract the tarball
	I0601 11:08:01.462974   23290 ssh_runner.go:146] rm: /preloaded.tar.lz4
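
The preload flow above is probe, copy, extract, clean up: the stat on /preloaded.tar.lz4 fails, so the cached tarball is scp'd over and unpacked into /var, then deleted. On the guest, the extract step alone looks like this (a minimal sketch, not the exact ssh_runner invocation; tar's -I flag runs lz4 as the decompressor):

	sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4
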
	I0601 11:08:01.498579   23290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:08:01.632927   23290 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0601 11:08:01.675302   23290 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0601 11:08:01.716761   23290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:08:01.729163   23290 docker.go:187] disabling docker service ...
	I0601 11:08:01.729230   23290 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0601 11:08:01.738936   23290 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0601 11:08:01.747551   23290 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0601 11:08:01.891059   23290 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0601 11:08:02.006038   23290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0601 11:08:02.018586   23290 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:08:02.030138   23290 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.2"|' -i /etc/containerd/config.toml"
	I0601 11:08:02.037403   23290 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0601 11:08:02.044989   23290 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0601 11:08:02.052169   23290 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.d"|' -i /etc/containerd/config.toml"
	I0601 11:08:02.059423   23290 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0601 11:08:02.066628   23290 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
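
The base64 payload dropped into 02-containerd.conf is tiny: "dmVyc2lvbiA9IDIK" decodes to a single TOML line that selects containerd's v2 config schema. Easy to confirm locally:

	$ echo 'dmVyc2lvbiA9IDIK' | base64 -d
	version = 2
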
	I0601 11:08:02.082980   23290 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0601 11:08:02.090525   23290 crio.go:137] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0601 11:08:02.090581   23290 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0601 11:08:02.103572   23290 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
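
The sysctl probe above fails only because br_netfilter is not loaded yet, so /proc/sys/net/bridge/ does not exist; the modprobe that follows creates it. The equivalent manual sequence, as a sketch:

	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables          # resolves once the module is loaded
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
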
	I0601 11:08:02.109834   23290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:08:02.375093   23290 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0601 11:08:04.442312   23290 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (2.067181794s)
	I0601 11:08:04.442344   23290 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0601 11:08:04.442400   23290 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:08:04.450047   23290 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
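
start.go gives the containerd socket 60s to appear, re-running the stat with backoff. A rough shell equivalent of that wait (the 60s budget is from the log; the 1s step is an assumption):

	for _ in $(seq 60); do
	  stat /run/containerd/containerd.sock >/dev/null 2>&1 && break
	  sleep 1
	done
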
	I0601 11:08:05.554931   23290 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:08:05.561244   23290 start.go:468] Will wait 60s for crictl version
	I0601 11:08:05.561327   23290 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:08:05.580453   23290 retry.go:31] will retry after 14.405090881s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-06-01T11:08:05Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0601 11:08:19.986382   23290 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:08:20.000433   23290 retry.go:31] will retry after 17.468400798s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-06-01T11:08:19Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0601 11:08:37.469997   23290 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:08:37.487113   23290 retry.go:31] will retry after 21.098569212s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-06-01T11:08:37Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	I0601 11:08:58.586038   23290 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:08:58.607813   23290 out.go:177] 
	W0601 11:08:58.609452   23290 out.go:239] X Exiting due to RUNTIME_ENABLE: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-06-01T11:08:58Z" level=fatal msg="getting the runtime version: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
	
	W0601 11:08:58.609478   23290 out.go:239] * 
	W0601 11:08:58.610793   23290 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0601 11:08:58.612474   23290 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:207: upgrade from v1.16.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-20220601110426-7337 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (271.46s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (43.13s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220601110620-7337 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

                                                
                                                
=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220601110620-7337 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (38.645453621s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-20220601110620-7337] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-20220601110620-7337 in cluster pause-20220601110620-7337
	* Updating the running kvm2 "pause-20220601110620-7337" VM ...
	* Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	* Done! kubectl is now configured to use "pause-20220601110620-7337" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0601 11:07:55.037375   23498 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:07:55.037493   23498 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:07:55.037503   23498 out.go:309] Setting ErrFile to fd 2...
	I0601 11:07:55.037514   23498 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:07:55.037625   23498 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:07:55.037847   23498 out.go:303] Setting JSON to false
	I0601 11:07:55.038705   23498 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3029,"bootTime":1654078646,"procs":261,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:07:55.038766   23498 start.go:125] virtualization: kvm guest
	I0601 11:07:55.041618   23498 out.go:177] * [pause-20220601110620-7337] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:07:55.043088   23498 notify.go:193] Checking for updates...
	I0601 11:07:55.043099   23498 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:07:55.044669   23498 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:07:55.046043   23498 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:07:55.048119   23498 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:07:55.049633   23498 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:07:55.051185   23498 config.go:178] Loaded profile config "pause-20220601110620-7337": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:07:55.051577   23498 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0601 11:07:55.051622   23498 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0601 11:07:55.065821   23498 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:36491
	I0601 11:07:55.066231   23498 main.go:134] libmachine: () Calling .GetVersion
	I0601 11:07:55.066737   23498 main.go:134] libmachine: Using API Version  1
	I0601 11:07:55.066765   23498 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 11:07:55.067131   23498 main.go:134] libmachine: () Calling .GetMachineName
	I0601 11:07:55.067304   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .DriverName
	I0601 11:07:55.067480   23498 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:07:55.067805   23498 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0601 11:07:55.067841   23498 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0601 11:07:55.081963   23498 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:42245
	I0601 11:07:55.082325   23498 main.go:134] libmachine: () Calling .GetVersion
	I0601 11:07:55.082728   23498 main.go:134] libmachine: Using API Version  1
	I0601 11:07:55.082753   23498 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 11:07:55.083019   23498 main.go:134] libmachine: () Calling .GetMachineName
	I0601 11:07:55.083183   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .DriverName
	I0601 11:07:55.116498   23498 out.go:177] * Using the kvm2 driver based on existing profile
	I0601 11:07:55.118026   23498 start.go:284] selected driver: kvm2
	I0601 11:07:55.118047   23498 start.go:806] validating driver "kvm2" against &{Name:pause-20220601110620-7337 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/13807/minikube-v1.26.0-1653677468-13807-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:pause-20220601110620-7337 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.64 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:07:55.118181   23498 start.go:817] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:07:55.118521   23498 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:07:55.118779   23498 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0601 11:07:55.132294   23498 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.26.0-beta.1
	I0601 11:07:55.133025   23498 cni.go:95] Creating CNI manager for ""
	I0601 11:07:55.133047   23498 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
	I0601 11:07:55.133059   23498 start_flags.go:306] config:
	{Name:pause-20220601110620-7337 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/13807/minikube-v1.26.0-1653677468-13807-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:pause-20220601110620-7337 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.64 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:07:55.133202   23498 iso.go:128] acquiring lock: {Name:mkad95a9aa9919c9e63cafd3e91a2bd2bcafb74e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 11:07:55.135351   23498 out.go:177] * Starting control plane node pause-20220601110620-7337 in cluster pause-20220601110620-7337
	I0601 11:07:55.136851   23498 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:07:55.136907   23498 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 11:07:55.136924   23498 cache.go:57] Caching tarball of preloaded images
	I0601 11:07:55.137009   23498 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0601 11:07:55.137030   23498 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0601 11:07:55.137126   23498 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601110620-7337/config.json ...
	I0601 11:07:55.137308   23498 cache.go:206] Successfully downloaded all kic artifacts
	I0601 11:07:55.137332   23498 start.go:352] acquiring machines lock for pause-20220601110620-7337: {Name:mk996831bcc8315bce9654ddce127329929e96ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0601 11:07:55.137399   23498 start.go:356] acquired machines lock for "pause-20220601110620-7337" in 50.225µs
	I0601 11:07:55.137414   23498 start.go:94] Skipping create...Using existing machine configuration
	I0601 11:07:55.137424   23498 fix.go:55] fixHost starting: 
	I0601 11:07:55.137736   23498 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0601 11:07:55.137776   23498 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0601 11:07:55.151942   23498 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:34291
	I0601 11:07:55.152348   23498 main.go:134] libmachine: () Calling .GetVersion
	I0601 11:07:55.152802   23498 main.go:134] libmachine: Using API Version  1
	I0601 11:07:55.152826   23498 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 11:07:55.153151   23498 main.go:134] libmachine: () Calling .GetMachineName
	I0601 11:07:55.153337   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .DriverName
	I0601 11:07:55.153478   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetState
	I0601 11:07:55.155039   23498 fix.go:103] recreateIfNeeded on pause-20220601110620-7337: state=Running err=<nil>
	W0601 11:07:55.155073   23498 fix.go:129] unexpected machine state, will restart: <nil>
	I0601 11:07:55.157216   23498 out.go:177] * Updating the running kvm2 "pause-20220601110620-7337" VM ...
	I0601 11:07:55.158523   23498 machine.go:88] provisioning docker machine ...
	I0601 11:07:55.158543   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .DriverName
	I0601 11:07:55.158738   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetMachineName
	I0601 11:07:55.158889   23498 buildroot.go:166] provisioning hostname "pause-20220601110620-7337"
	I0601 11:07:55.158910   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetMachineName
	I0601 11:07:55.159056   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHHostname
	I0601 11:07:55.161255   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:07:55.161648   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c6:ea", ip: ""} in network mk-pause-20220601110620-7337: {Iface:virbr5 ExpiryTime:2022-06-01 12:06:42 +0000 UTC Type:0 Mac:52:54:00:40:c6:ea Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:pause-20220601110620-7337 Clientid:01:52:54:00:40:c6:ea}
	I0601 11:07:55.161683   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined IP address 192.168.50.64 and MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:07:55.161846   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHPort
	I0601 11:07:55.162036   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHKeyPath
	I0601 11:07:55.162248   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHKeyPath
	I0601 11:07:55.162355   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHUsername
	I0601 11:07:55.162501   23498 main.go:134] libmachine: Using SSH client type: native
	I0601 11:07:55.162646   23498 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0601 11:07:55.162662   23498 main.go:134] libmachine: About to run SSH command:
	sudo hostname pause-20220601110620-7337 && echo "pause-20220601110620-7337" | sudo tee /etc/hostname
	I0601 11:07:55.283621   23498 main.go:134] libmachine: SSH cmd err, output: <nil>: pause-20220601110620-7337
	
	I0601 11:07:55.283661   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHHostname
	I0601 11:07:55.286387   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:07:55.286771   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c6:ea", ip: ""} in network mk-pause-20220601110620-7337: {Iface:virbr5 ExpiryTime:2022-06-01 12:06:42 +0000 UTC Type:0 Mac:52:54:00:40:c6:ea Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:pause-20220601110620-7337 Clientid:01:52:54:00:40:c6:ea}
	I0601 11:07:55.286835   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined IP address 192.168.50.64 and MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:07:55.286927   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHPort
	I0601 11:07:55.287148   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHKeyPath
	I0601 11:07:55.287331   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHKeyPath
	I0601 11:07:55.287456   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHUsername
	I0601 11:07:55.287654   23498 main.go:134] libmachine: Using SSH client type: native
	I0601 11:07:55.287791   23498 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0601 11:07:55.287811   23498 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20220601110620-7337' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20220601110620-7337/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20220601110620-7337' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0601 11:07:55.400842   23498 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0601 11:07:55.400874   23498 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube}
	I0601 11:07:55.400926   23498 buildroot.go:174] setting up certificates
	I0601 11:07:55.400941   23498 provision.go:83] configureAuth start
	I0601 11:07:55.400959   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetMachineName
	I0601 11:07:55.401204   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetIP
	I0601 11:07:55.403889   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:07:55.404245   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c6:ea", ip: ""} in network mk-pause-20220601110620-7337: {Iface:virbr5 ExpiryTime:2022-06-01 12:06:42 +0000 UTC Type:0 Mac:52:54:00:40:c6:ea Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:pause-20220601110620-7337 Clientid:01:52:54:00:40:c6:ea}
	I0601 11:07:55.404281   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined IP address 192.168.50.64 and MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:07:55.404463   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHHostname
	I0601 11:07:55.406996   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:07:55.407347   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c6:ea", ip: ""} in network mk-pause-20220601110620-7337: {Iface:virbr5 ExpiryTime:2022-06-01 12:06:42 +0000 UTC Type:0 Mac:52:54:00:40:c6:ea Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:pause-20220601110620-7337 Clientid:01:52:54:00:40:c6:ea}
	I0601 11:07:55.407390   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined IP address 192.168.50.64 and MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:07:55.407557   23498 provision.go:138] copyHostCerts
	I0601 11:07:55.407612   23498 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem, removing ...
	I0601 11:07:55.407627   23498 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem
	I0601 11:07:55.407670   23498 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.pem (1082 bytes)
	I0601 11:07:55.407760   23498 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem, removing ...
	I0601 11:07:55.407776   23498 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem
	I0601 11:07:55.407798   23498 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cert.pem (1123 bytes)
	I0601 11:07:55.407854   23498 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem, removing ...
	I0601 11:07:55.407863   23498 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem
	I0601 11:07:55.407881   23498 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/key.pem (1675 bytes)
	I0601 11:07:55.407937   23498 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem org=jenkins.pause-20220601110620-7337 san=[192.168.50.64 192.168.50.64 localhost 127.0.0.1 minikube pause-20220601110620-7337]
	I0601 11:07:55.606054   23498 provision.go:172] copyRemoteCerts
	I0601 11:07:55.606107   23498 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0601 11:07:55.606130   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHHostname
	I0601 11:07:55.608651   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:07:55.608958   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c6:ea", ip: ""} in network mk-pause-20220601110620-7337: {Iface:virbr5 ExpiryTime:2022-06-01 12:06:42 +0000 UTC Type:0 Mac:52:54:00:40:c6:ea Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:pause-20220601110620-7337 Clientid:01:52:54:00:40:c6:ea}
	I0601 11:07:55.608983   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined IP address 192.168.50.64 and MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:07:55.609178   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHPort
	I0601 11:07:55.609361   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHKeyPath
	I0601 11:07:55.609528   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHUsername
	I0601 11:07:55.609677   23498 sshutil.go:53] new ssh client: &{IP:192.168.50.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/pause-20220601110620-7337/id_rsa Username:docker}
	I0601 11:07:55.693426   23498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0601 11:07:55.715142   23498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0601 11:07:55.736482   23498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0601 11:07:55.757582   23498 provision.go:86] duration metric: configureAuth took 356.628615ms
	I0601 11:07:55.757604   23498 buildroot.go:189] setting minikube options for container-runtime
	I0601 11:07:55.757792   23498 config.go:178] Loaded profile config "pause-20220601110620-7337": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:07:55.757805   23498 machine.go:91] provisioned docker machine in 599.271806ms
	I0601 11:07:55.757811   23498 start.go:306] post-start starting for "pause-20220601110620-7337" (driver="kvm2")
	I0601 11:07:55.757818   23498 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0601 11:07:55.757852   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .DriverName
	I0601 11:07:55.758137   23498 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0601 11:07:55.758174   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHHostname
	I0601 11:07:55.760612   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:07:55.760966   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c6:ea", ip: ""} in network mk-pause-20220601110620-7337: {Iface:virbr5 ExpiryTime:2022-06-01 12:06:42 +0000 UTC Type:0 Mac:52:54:00:40:c6:ea Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:pause-20220601110620-7337 Clientid:01:52:54:00:40:c6:ea}
	I0601 11:07:55.761003   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined IP address 192.168.50.64 and MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:07:55.761132   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHPort
	I0601 11:07:55.761319   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHKeyPath
	I0601 11:07:55.761484   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHUsername
	I0601 11:07:55.761636   23498 sshutil.go:53] new ssh client: &{IP:192.168.50.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/pause-20220601110620-7337/id_rsa Username:docker}
	I0601 11:07:55.845845   23498 ssh_runner.go:195] Run: cat /etc/os-release
	I0601 11:07:55.850030   23498 info.go:137] Remote host: Buildroot 2021.02.12
	I0601 11:07:55.850049   23498 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/addons for local assets ...
	I0601 11:07:55.850115   23498 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files for local assets ...
	I0601 11:07:55.850186   23498 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/73372.pem -> 73372.pem in /etc/ssl/certs
	I0601 11:07:55.850415   23498 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0601 11:07:55.859957   23498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/73372.pem --> /etc/ssl/certs/73372.pem (1708 bytes)
	I0601 11:07:55.882336   23498 start.go:309] post-start completed in 124.514687ms
	I0601 11:07:55.882356   23498 fix.go:57] fixHost completed within 744.932828ms
	I0601 11:07:55.882376   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHHostname
	I0601 11:07:55.885071   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:07:55.885411   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c6:ea", ip: ""} in network mk-pause-20220601110620-7337: {Iface:virbr5 ExpiryTime:2022-06-01 12:06:42 +0000 UTC Type:0 Mac:52:54:00:40:c6:ea Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:pause-20220601110620-7337 Clientid:01:52:54:00:40:c6:ea}
	I0601 11:07:55.885448   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined IP address 192.168.50.64 and MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:07:55.885629   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHPort
	I0601 11:07:55.885829   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHKeyPath
	I0601 11:07:55.886012   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHKeyPath
	I0601 11:07:55.886141   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHUsername
	I0601 11:07:55.886286   23498 main.go:134] libmachine: Using SSH client type: native
	I0601 11:07:55.886428   23498 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7da240] 0x7dd2a0 <nil>  [] 0s} 192.168.50.64 22 <nil> <nil>}
	I0601 11:07:55.886441   23498 main.go:134] libmachine: About to run SSH command:
	date +%s.%N
	I0601 11:07:56.000192   23498 main.go:134] libmachine: SSH cmd err, output: <nil>: 1654081675.994877459
	
	I0601 11:07:56.000224   23498 fix.go:207] guest clock: 1654081675.994877459
	I0601 11:07:56.000237   23498 fix.go:220] Guest: 2022-06-01 11:07:55.994877459 +0000 UTC Remote: 2022-06-01 11:07:55.882359801 +0000 UTC m=+0.899248965 (delta=112.517658ms)
	I0601 11:07:56.000260   23498 fix.go:191] guest clock delta is within tolerance: 112.517658ms
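
The clock check runs date +%s.%N over SSH and compares it to the host wall clock; the 112ms delta here is well inside tolerance. The same comparison by hand (the key path is a placeholder; the user and IP come from the log above):

	guest=$(ssh -i <id_rsa> docker@192.168.50.64 date +%s.%N)
	host=$(date +%s.%N)
	awk -v g="$guest" -v h="$host" 'BEGIN { printf "delta=%.3fs\n", h - g }'
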
	I0601 11:07:56.000267   23498 start.go:81] releasing machines lock for "pause-20220601110620-7337", held for 862.857122ms
	I0601 11:07:56.000315   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .DriverName
	I0601 11:07:56.000587   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetIP
	I0601 11:07:56.002962   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:07:56.003326   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c6:ea", ip: ""} in network mk-pause-20220601110620-7337: {Iface:virbr5 ExpiryTime:2022-06-01 12:06:42 +0000 UTC Type:0 Mac:52:54:00:40:c6:ea Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:pause-20220601110620-7337 Clientid:01:52:54:00:40:c6:ea}
	I0601 11:07:56.003363   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined IP address 192.168.50.64 and MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:07:56.003486   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .DriverName
	I0601 11:07:56.003665   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .DriverName
	I0601 11:07:56.003819   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .DriverName
	I0601 11:07:56.004214   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .DriverName
	I0601 11:07:56.004397   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .DriverName
	I0601 11:07:56.004486   23498 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0601 11:07:56.004533   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHHostname
	I0601 11:07:56.004690   23498 ssh_runner.go:195] Run: systemctl --version
	I0601 11:07:56.004719   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHHostname
	I0601 11:07:56.007249   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:07:56.007490   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:07:56.007697   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c6:ea", ip: ""} in network mk-pause-20220601110620-7337: {Iface:virbr5 ExpiryTime:2022-06-01 12:06:42 +0000 UTC Type:0 Mac:52:54:00:40:c6:ea Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:pause-20220601110620-7337 Clientid:01:52:54:00:40:c6:ea}
	I0601 11:07:56.007734   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined IP address 192.168.50.64 and MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:07:56.007880   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c6:ea", ip: ""} in network mk-pause-20220601110620-7337: {Iface:virbr5 ExpiryTime:2022-06-01 12:06:42 +0000 UTC Type:0 Mac:52:54:00:40:c6:ea Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:pause-20220601110620-7337 Clientid:01:52:54:00:40:c6:ea}
	I0601 11:07:56.007902   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHPort
	I0601 11:07:56.007919   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined IP address 192.168.50.64 and MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:07:56.008023   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHPort
	I0601 11:07:56.008130   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHKeyPath
	I0601 11:07:56.008283   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHKeyPath
	I0601 11:07:56.008374   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHUsername
	I0601 11:07:56.008444   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHUsername
	I0601 11:07:56.008505   23498 sshutil.go:53] new ssh client: &{IP:192.168.50.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/pause-20220601110620-7337/id_rsa Username:docker}
	I0601 11:07:56.008542   23498 sshutil.go:53] new ssh client: &{IP:192.168.50.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/pause-20220601110620-7337/id_rsa Username:docker}
	I0601 11:07:56.108723   23498 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:07:56.108806   23498 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:07:56.137885   23498 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:07:56.137915   23498 containerd.go:461] Images already preloaded, skipping extraction
	I0601 11:07:56.137973   23498 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0601 11:07:56.151587   23498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0601 11:07:56.164830   23498 docker.go:187] disabling docker service ...
	I0601 11:07:56.164878   23498 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0601 11:07:56.180143   23498 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0601 11:07:56.193682   23498 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0601 11:07:56.325474   23498 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0601 11:07:56.496372   23498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0601 11:07:56.511554   23498 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0601 11:07:56.532264   23498 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*sandbox_image = .*$|sandbox_image = "k8s.gcr.io/pause:3.6"|' -i /etc/containerd/config.toml"
	I0601 11:07:56.544342   23498 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*restrict_oom_score_adj = .*$|restrict_oom_score_adj = false|' -i /etc/containerd/config.toml"
	I0601 11:07:56.555802   23498 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*SystemdCgroup = .*$|SystemdCgroup = false|' -i /etc/containerd/config.toml"
	I0601 11:07:56.568621   23498 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^.*conf_dir = .*$|conf_dir = "/etc/cni/net.d"|' -i /etc/containerd/config.toml"
	I0601 11:07:56.582217   23498 ssh_runner.go:195] Run: /bin/bash -c "sudo sed -e 's|^# imports|imports = ["/etc/containerd/containerd.conf.d/02-containerd.conf"]|' -i /etc/containerd/config.toml"
	I0601 11:07:56.596310   23498 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd/containerd.conf.d && printf %s "dmVyc2lvbiA9IDIK" | base64 -d | sudo tee /etc/containerd/containerd.conf.d/02-containerd.conf"
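
This repeats the containerd rewrite from the failed upgrade run above; the only substantive difference is the sandbox image (k8s.gcr.io/pause:3.6 here versus pause:3.2 there), tracking each cluster's Kubernetes version. To confirm what landed in the config:

	grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
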
	I0601 11:07:56.616586   23498 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0601 11:07:56.628696   23498 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0601 11:07:56.640864   23498 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0601 11:07:56.798096   23498 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0601 11:07:56.836436   23498 start.go:447] Will wait 60s for socket path /run/containerd/containerd.sock
	I0601 11:07:56.836517   23498 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:07:56.843452   23498 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0601 11:07:57.948716   23498 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:07:57.954095   23498 retry.go:31] will retry after 2.160763633s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0601 11:08:00.115977   23498 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0601 11:08:00.122927   23498 start.go:468] Will wait 60s for crictl version
	I0601 11:08:00.122997   23498 ssh_runner.go:195] Run: sudo crictl version
	I0601 11:08:00.181525   23498 start.go:477] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.6.4
	RuntimeApiVersion:  v1alpha2
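
Unlike the stopped-upgrade guest above, this VM's containerd 1.6.4 answers the probe immediately and reports RuntimeApiVersion v1alpha2, so the 60s crictl wait resolves on the first try.
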
	I0601 11:08:00.181594   23498 ssh_runner.go:195] Run: containerd --version
	I0601 11:08:00.226248   23498 ssh_runner.go:195] Run: containerd --version
	I0601 11:08:00.275500   23498 out.go:177] * Preparing Kubernetes v1.23.6 on containerd 1.6.4 ...
	I0601 11:08:00.277276   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetIP
	I0601 11:08:00.280742   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:08:00.281192   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c6:ea", ip: ""} in network mk-pause-20220601110620-7337: {Iface:virbr5 ExpiryTime:2022-06-01 12:06:42 +0000 UTC Type:0 Mac:52:54:00:40:c6:ea Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:pause-20220601110620-7337 Clientid:01:52:54:00:40:c6:ea}
	I0601 11:08:00.281225   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined IP address 192.168.50.64 and MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:08:00.281503   23498 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0601 11:08:00.292045   23498 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 11:08:00.292112   23498 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:08:00.336714   23498 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:08:00.336742   23498 containerd.go:461] Images already preloaded, skipping extraction
	I0601 11:08:00.336795   23498 ssh_runner.go:195] Run: sudo crictl images --output json
	I0601 11:08:00.380730   23498 containerd.go:547] all images are preloaded for containerd runtime.
	I0601 11:08:00.380763   23498 cache_images.go:84] Images are preloaded, skipping loading
	I0601 11:08:00.380825   23498 ssh_runner.go:195] Run: sudo crictl info
	I0601 11:08:00.428577   23498 cni.go:95] Creating CNI manager for ""
	I0601 11:08:00.428605   23498 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
	I0601 11:08:00.428618   23498 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0601 11:08:00.428633   23498 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.64 APIServerPort:8443 KubernetesVersion:v1.23.6 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20220601110620-7337 NodeName:pause-20220601110620-7337 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.64"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.50.64 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0601 11:08:00.428806   23498 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.64
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "pause-20220601110620-7337"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.64
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.64"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
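
	Note: the rendered file above bundles four kubeadm API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A sketch for inspecting what actually landed on the node, assuming SSH access to the profile via the minikube CLI:

	    $ minikube ssh -p pause-20220601110620-7337 -- sudo cat /var/tmp/minikube/kubeadm.yaml
	    $ minikube ssh -p pause-20220601110620-7337 -- sudo cat /var/lib/kubelet/config.yaml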
	
	I0601 11:08:00.428911   23498 kubeadm.go:961] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=pause-20220601110620-7337 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.64 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6 ClusterName:pause-20220601110620-7337 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0601 11:08:00.428972   23498 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6
	I0601 11:08:00.457182   23498 binaries.go:44] Found k8s binaries, skipping transfer
	I0601 11:08:00.457257   23498 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0601 11:08:00.474125   23498 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (540 bytes)
	I0601 11:08:00.517952   23498 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0601 11:08:00.547783   23498 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2057 bytes)
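
	The three transfers above correspond to the kubelet systemd drop-in (10-kubeadm.conf), the base kubelet.service unit, and the staged kubeadm config (kubeadm.yaml.new). A sketch for verifying the effective unit on the node, assuming systemd is reachable over SSH:

	    $ sudo systemctl cat kubelet
	    $ sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf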
	I0601 11:08:00.588435   23498 ssh_runner.go:195] Run: grep 192.168.50.64	control-plane.minikube.internal$ /etc/hosts
	I0601 11:08:00.593477   23498 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601110620-7337 for IP: 192.168.50.64
	I0601 11:08:00.593644   23498 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key
	I0601 11:08:00.593725   23498 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key
	I0601 11:08:00.593842   23498 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601110620-7337/client.key
	I0601 11:08:00.593905   23498 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601110620-7337/apiserver.key.c3c735f8
	I0601 11:08:00.593956   23498 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601110620-7337/proxy-client.key
	I0601 11:08:00.594088   23498 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/7337.pem (1338 bytes)
	W0601 11:08:00.594129   23498 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/7337_empty.pem, impossibly tiny 0 bytes
	I0601 11:08:00.594146   23498 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca-key.pem (1679 bytes)
	I0601 11:08:00.594180   23498 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/ca.pem (1082 bytes)
	I0601 11:08:00.594214   23498 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/cert.pem (1123 bytes)
	I0601 11:08:00.594253   23498 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/key.pem (1675 bytes)
	I0601 11:08:00.594325   23498 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/73372.pem (1708 bytes)
	I0601 11:08:00.595057   23498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601110620-7337/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0601 11:08:00.630480   23498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601110620-7337/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0601 11:08:00.676425   23498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601110620-7337/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0601 11:08:00.719010   23498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601110620-7337/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0601 11:08:00.758025   23498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0601 11:08:00.789272   23498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0601 11:08:00.820054   23498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0601 11:08:00.848649   23498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0601 11:08:00.879862   23498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0601 11:08:00.916099   23498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/certs/7337.pem --> /usr/share/ca-certificates/7337.pem (1338 bytes)
	I0601 11:08:00.952486   23498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/ssl/certs/73372.pem --> /usr/share/ca-certificates/73372.pem (1708 bytes)
	I0601 11:08:00.985322   23498 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
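
	Everything under /var/lib/minikube/certs is copied from the host-side profile, so the two PKI trees should be byte-identical. One hedged way to spot-check that is to fingerprint the CA on both ends (assuming MINIKUBE_HOME points at the .minikube directory logged above):

	    $ openssl x509 -noout -fingerprint -sha256 -in "$MINIKUBE_HOME/ca.crt"
	    $ minikube ssh -p pause-20220601110620-7337 -- sudo openssl x509 -noout -fingerprint -sha256 -in /var/lib/minikube/certs/ca.crt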
	I0601 11:08:01.005248   23498 ssh_runner.go:195] Run: openssl version
	I0601 11:08:01.013133   23498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0601 11:08:01.027132   23498 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:08:01.034579   23498 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Jun  1 10:20 /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:08:01.034636   23498 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0601 11:08:01.042174   23498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0601 11:08:01.055155   23498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7337.pem && ln -fs /usr/share/ca-certificates/7337.pem /etc/ssl/certs/7337.pem"
	I0601 11:08:01.068617   23498 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7337.pem
	I0601 11:08:01.075378   23498 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Jun  1 10:26 /usr/share/ca-certificates/7337.pem
	I0601 11:08:01.075479   23498 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7337.pem
	I0601 11:08:01.081314   23498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7337.pem /etc/ssl/certs/51391683.0"
	I0601 11:08:01.093782   23498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/73372.pem && ln -fs /usr/share/ca-certificates/73372.pem /etc/ssl/certs/73372.pem"
	I0601 11:08:01.110287   23498 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/73372.pem
	I0601 11:08:01.116829   23498 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Jun  1 10:26 /usr/share/ca-certificates/73372.pem
	I0601 11:08:01.116908   23498 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/73372.pem
	I0601 11:08:01.124323   23498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/73372.pem /etc/ssl/certs/3ec20f2e.0"
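
	The symlink names above are OpenSSL subject hashes: "openssl x509 -hash" prints the value the system trust store uses as the certificate's filename, which is why minikubeCA.pem is hashed immediately before b5213941.0 is linked. Reproducible as a sketch:

	    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    b5213941
	    $ ls -l /etc/ssl/certs/b5213941.0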
	I0601 11:08:01.136263   23498 kubeadm.go:395] StartCluster: {Name:pause-20220601110620-7337 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/13807/minikube-v1.26.0-1653677468-13807-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:pause-20220601110620-7337 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.64 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 11:08:01.136356   23498 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0601 11:08:01.136413   23498 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:08:01.173321   23498 cri.go:87] found id: "210d0cb9ee945ff5c4e7df4bfb8670d8d1737c543cb60322309d344491b21cb2"
	I0601 11:08:01.173356   23498 cri.go:87] found id: "6e9d7e184abed4789b1f1d5e9279f2e6e10c04b7c1f2c361b24609a47937900c"
	I0601 11:08:01.173366   23498 cri.go:87] found id: "cd261e2d2ff3b0de0b3fe0411dec2110ca530014dcdc702e0acd927e9d6fd7f8"
	I0601 11:08:01.173374   23498 cri.go:87] found id: "5b9eea7e9f630b4f732f8810f7ecbfacf550b07152d3c2ec94cb2a7d2f311190"
	I0601 11:08:01.173384   23498 cri.go:87] found id: "ad8fa72d1866ad4c1dc86626944739f7227b699d591b5ed6f510390f961b1dd0"
	I0601 11:08:01.173393   23498 cri.go:87] found id: "56b11b45ac07d6beafde3aaf8283c976fc2b48fe111f9be2f406a2c0d0a3009b"
	I0601 11:08:01.173402   23498 cri.go:87] found id: ""
	I0601 11:08:01.173457   23498 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0601 11:08:01.192153   23498 cri.go:114] JSON = null
	W0601 11:08:01.192192   23498 kubeadm.go:402] unpause failed: list paused: list returned 0 containers, but ps returned 6
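
	This warning is the CRI/runc disagreement at the heart of the pause flow: crictl (talking to containerd over CRI) reported six kube-system containers, while runc, queried directly in the k8s.io runtime root, returned null, so the unpause step is abandoned and minikube falls through to a full cluster restart. The two views can be compared by hand with the same commands the log shows:

	    $ sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	    $ sudo runc --root /run/containerd/runc/k8s.io list -f json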
	I0601 11:08:01.192277   23498 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0601 11:08:01.215489   23498 kubeadm.go:410] found existing configuration files, will attempt cluster restart
	I0601 11:08:01.215534   23498 kubeadm.go:626] restartCluster start
	I0601 11:08:01.215590   23498 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0601 11:08:01.230991   23498 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:08:01.231863   23498 kubeconfig.go:92] found "pause-20220601110620-7337" server: "https://192.168.50.64:8443"
	I0601 11:08:01.232699   23498 kapi.go:59] client config for pause-20220601110620-7337: &rest.Config{Host:"https://192.168.50.64:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601110620-7337/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601110620-7337/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17122e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0601 11:08:01.233363   23498 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0601 11:08:01.246622   23498 api_server.go:165] Checking apiserver status ...
	I0601 11:08:01.246688   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:08:01.263045   23498 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
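
	pgrep -xnf matches the regex against each process's full command line (-f), requires a whole-line match (-x), and keeps only the newest hit (-n); exit status 1 simply means no kube-apiserver process exists yet, and the loop below retries roughly every 200ms. A one-liner sketch of the same probe:

	    $ sudo pgrep -xnf 'kube-apiserver.*minikube.*'; echo exit=$?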
	I0601 11:08:01.463488   23498 api_server.go:165] Checking apiserver status ...
	I0601 11:08:01.463577   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:08:01.478261   23498 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:08:01.663335   23498 api_server.go:165] Checking apiserver status ...
	I0601 11:08:01.663411   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:08:01.678420   23498 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:08:01.863760   23498 api_server.go:165] Checking apiserver status ...
	I0601 11:08:01.863833   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:08:01.875532   23498 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:08:02.063837   23498 api_server.go:165] Checking apiserver status ...
	I0601 11:08:02.063905   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:08:02.076735   23498 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:08:02.264102   23498 api_server.go:165] Checking apiserver status ...
	I0601 11:08:02.264179   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:08:02.277980   23498 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:08:02.464099   23498 api_server.go:165] Checking apiserver status ...
	I0601 11:08:02.464167   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:08:02.475978   23498 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:08:02.663240   23498 api_server.go:165] Checking apiserver status ...
	I0601 11:08:02.663305   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:08:02.674432   23498 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:08:02.863705   23498 api_server.go:165] Checking apiserver status ...
	I0601 11:08:02.863818   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:08:02.875098   23498 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:08:03.063374   23498 api_server.go:165] Checking apiserver status ...
	I0601 11:08:03.063447   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:08:03.074143   23498 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:08:03.263346   23498 api_server.go:165] Checking apiserver status ...
	I0601 11:08:03.263437   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:08:03.274238   23498 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:08:03.463565   23498 api_server.go:165] Checking apiserver status ...
	I0601 11:08:03.463640   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:08:03.474344   23498 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:08:03.663552   23498 api_server.go:165] Checking apiserver status ...
	I0601 11:08:03.663627   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:08:03.674412   23498 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:08:03.863702   23498 api_server.go:165] Checking apiserver status ...
	I0601 11:08:03.863779   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:08:03.876028   23498 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:08:04.063253   23498 api_server.go:165] Checking apiserver status ...
	I0601 11:08:04.063331   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:08:04.074457   23498 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:08:04.263727   23498 api_server.go:165] Checking apiserver status ...
	I0601 11:08:04.263793   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:08:04.277807   23498 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:08:04.277830   23498 api_server.go:165] Checking apiserver status ...
	I0601 11:08:04.277869   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0601 11:08:04.293110   23498 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:08:04.293146   23498 kubeadm.go:601] needs reconfigure: apiserver error: timed out waiting for the condition
	I0601 11:08:04.293152   23498 kubeadm.go:1092] stopping kube-system containers ...
	I0601 11:08:04.293162   23498 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0601 11:08:04.293228   23498 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0601 11:08:04.325345   23498 cri.go:87] found id: "210d0cb9ee945ff5c4e7df4bfb8670d8d1737c543cb60322309d344491b21cb2"
	I0601 11:08:04.325380   23498 cri.go:87] found id: "6e9d7e184abed4789b1f1d5e9279f2e6e10c04b7c1f2c361b24609a47937900c"
	I0601 11:08:04.325388   23498 cri.go:87] found id: "cd261e2d2ff3b0de0b3fe0411dec2110ca530014dcdc702e0acd927e9d6fd7f8"
	I0601 11:08:04.325394   23498 cri.go:87] found id: "5b9eea7e9f630b4f732f8810f7ecbfacf550b07152d3c2ec94cb2a7d2f311190"
	I0601 11:08:04.325399   23498 cri.go:87] found id: "ad8fa72d1866ad4c1dc86626944739f7227b699d591b5ed6f510390f961b1dd0"
	I0601 11:08:04.325406   23498 cri.go:87] found id: "56b11b45ac07d6beafde3aaf8283c976fc2b48fe111f9be2f406a2c0d0a3009b"
	I0601 11:08:04.325412   23498 cri.go:87] found id: ""
	I0601 11:08:04.325418   23498 cri.go:232] Stopping containers: [210d0cb9ee945ff5c4e7df4bfb8670d8d1737c543cb60322309d344491b21cb2 6e9d7e184abed4789b1f1d5e9279f2e6e10c04b7c1f2c361b24609a47937900c cd261e2d2ff3b0de0b3fe0411dec2110ca530014dcdc702e0acd927e9d6fd7f8 5b9eea7e9f630b4f732f8810f7ecbfacf550b07152d3c2ec94cb2a7d2f311190 ad8fa72d1866ad4c1dc86626944739f7227b699d591b5ed6f510390f961b1dd0 56b11b45ac07d6beafde3aaf8283c976fc2b48fe111f9be2f406a2c0d0a3009b]
	I0601 11:08:04.325467   23498 ssh_runner.go:195] Run: which crictl
	I0601 11:08:04.329658   23498 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 210d0cb9ee945ff5c4e7df4bfb8670d8d1737c543cb60322309d344491b21cb2 6e9d7e184abed4789b1f1d5e9279f2e6e10c04b7c1f2c361b24609a47937900c cd261e2d2ff3b0de0b3fe0411dec2110ca530014dcdc702e0acd927e9d6fd7f8 5b9eea7e9f630b4f732f8810f7ecbfacf550b07152d3c2ec94cb2a7d2f311190 ad8fa72d1866ad4c1dc86626944739f7227b699d591b5ed6f510390f961b1dd0 56b11b45ac07d6beafde3aaf8283c976fc2b48fe111f9be2f406a2c0d0a3009b
	I0601 11:08:04.379666   23498 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0601 11:08:04.421489   23498 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0601 11:08:04.451004   23498 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jun  1 11:07 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jun  1 11:07 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Jun  1 11:07 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jun  1 11:07 /etc/kubernetes/scheduler.conf
	
	I0601 11:08:04.451071   23498 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0601 11:08:04.476064   23498 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0601 11:08:04.520643   23498 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0601 11:08:04.540150   23498 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:08:04.540214   23498 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0601 11:08:04.553185   23498 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0601 11:08:04.566317   23498 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0601 11:08:04.566402   23498 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0601 11:08:04.579042   23498 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0601 11:08:04.594799   23498 kubeadm.go:703] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0601 11:08:04.594830   23498 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:08:04.691107   23498 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:08:05.550348   23498 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:08:05.770900   23498 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:08:05.873046   23498 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
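
	Rather than a full kubeadm init, the restart path replays the individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config. The equivalent manual invocation, as a sketch using the same binary path as the log:

	    $ sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" \
	        kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml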
	I0601 11:08:05.958289   23498 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:08:05.958361   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:08:06.470635   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:08:06.970797   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:08:07.470305   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:08:07.970749   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:08:08.470252   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:08:08.970689   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:08:09.470130   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:08:09.970994   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:08:10.470927   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:08:10.970273   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:08:11.471077   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:08:11.970214   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:08:12.470441   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:08:12.970632   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:08:12.987175   23498 api_server.go:71] duration metric: took 7.028886868s to wait for apiserver process to appear ...
	I0601 11:08:12.987215   23498 api_server.go:87] waiting for apiserver healthz status ...
	I0601 11:08:12.987227   23498 api_server.go:240] Checking apiserver healthz at https://192.168.50.64:8443/healthz ...
	I0601 11:08:12.987816   23498 api_server.go:256] stopped: https://192.168.50.64:8443/healthz: Get "https://192.168.50.64:8443/healthz": dial tcp 192.168.50.64:8443: connect: connection refused
	I0601 11:08:13.488568   23498 api_server.go:240] Checking apiserver healthz at https://192.168.50.64:8443/healthz ...
	I0601 11:08:17.199310   23498 api_server.go:266] https://192.168.50.64:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0601 11:08:17.199338   23498 api_server.go:102] status: https://192.168.50.64:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0601 11:08:17.488730   23498 api_server.go:240] Checking apiserver healthz at https://192.168.50.64:8443/healthz ...
	I0601 11:08:17.497657   23498 api_server.go:266] https://192.168.50.64:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:08:17.497680   23498 api_server.go:102] status: https://192.168.50.64:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
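
	Lines prefixed [-] are poststarthooks that have not completed yet; the two stragglers here (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) are the usual last steps of apiserver startup, with reasons withheld from unauthorized callers. Individual checks are addressable too, as a sketch (assuming this apiserver version exposes the per-check healthz paths):

	    $ curl -k https://192.168.50.64:8443/healthz/poststarthook/rbac/bootstrap-roles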
	I0601 11:08:17.988186   23498 api_server.go:240] Checking apiserver healthz at https://192.168.50.64:8443/healthz ...
	I0601 11:08:17.992812   23498 api_server.go:266] https://192.168.50.64:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0601 11:08:17.992840   23498 api_server.go:102] status: https://192.168.50.64:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0601 11:08:18.487968   23498 api_server.go:240] Checking apiserver healthz at https://192.168.50.64:8443/healthz ...
	I0601 11:08:18.493035   23498 api_server.go:266] https://192.168.50.64:8443/healthz returned 200:
	ok
	I0601 11:08:18.499296   23498 api_server.go:140] control plane version: v1.23.6
	I0601 11:08:18.499316   23498 api_server.go:130] duration metric: took 5.512094394s to wait for apiserver health ...
	I0601 11:08:18.499326   23498 cni.go:95] Creating CNI manager for ""
	I0601 11:08:18.499335   23498 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
	I0601 11:08:18.501186   23498 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0601 11:08:18.502562   23498 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0601 11:08:18.511702   23498 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
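
	With the kvm2 driver and containerd there is no driver-provided network plugin, so minikube drops a bridge CNI config (457 bytes) into /etc/cni/net.d. To see what was written, a sketch over SSH:

	    $ minikube ssh -p pause-20220601110620-7337 -- sudo cat /etc/cni/net.d/1-k8s.conflist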
	I0601 11:08:18.540861   23498 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 11:08:18.550257   23498 system_pods.go:59] 6 kube-system pods found
	I0601 11:08:18.550287   23498 system_pods.go:61] "coredns-64897985d-cfd9b" [33da1e8b-2c7a-4988-9dfe-3162061c879e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0601 11:08:18.550294   23498 system_pods.go:61] "etcd-pause-20220601110620-7337" [d8fa1287-1138-46bd-ab96-01cf4324fd0a] Running
	I0601 11:08:18.550300   23498 system_pods.go:61] "kube-apiserver-pause-20220601110620-7337" [c09b0b2b-da78-4b5f-98d4-471f3ecfc3c1] Running
	I0601 11:08:18.550306   23498 system_pods.go:61] "kube-controller-manager-pause-20220601110620-7337" [08519271-d393-4c31-b3b5-166bddc4c3ca] Running
	I0601 11:08:18.550320   23498 system_pods.go:61] "kube-proxy-khg8x" [57bb2264-4bf6-4bf6-8d33-a600f8a192a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0601 11:08:18.550333   23498 system_pods.go:61] "kube-scheduler-pause-20220601110620-7337" [34099a13-3eb0-49fe-a4cc-721b2e7a9159] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0601 11:08:18.550348   23498 system_pods.go:74] duration metric: took 9.458918ms to wait for pod list to return data ...
	I0601 11:08:18.550355   23498 node_conditions.go:102] verifying NodePressure condition ...
	I0601 11:08:18.553655   23498 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0601 11:08:18.553684   23498 node_conditions.go:123] node cpu capacity is 2
	I0601 11:08:18.553695   23498 node_conditions.go:105] duration metric: took 3.33077ms to run NodePressure ...
	I0601 11:08:18.553711   23498 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0601 11:08:18.834673   23498 kubeadm.go:762] waiting for restarted kubelet to initialise ...
	I0601 11:08:18.838578   23498 kubeadm.go:777] kubelet initialised
	I0601 11:08:18.838598   23498 kubeadm.go:778] duration metric: took 3.902421ms waiting for restarted kubelet to initialise ...
	I0601 11:08:18.838604   23498 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:08:18.843092   23498 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-cfd9b" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:20.862913   23498 pod_ready.go:102] pod "coredns-64897985d-cfd9b" in "kube-system" namespace has status "Ready":"False"
	I0601 11:08:21.363173   23498 pod_ready.go:92] pod "coredns-64897985d-cfd9b" in "kube-system" namespace has status "Ready":"True"
	I0601 11:08:21.363206   23498 pod_ready.go:81] duration metric: took 2.520086647s waiting for pod "coredns-64897985d-cfd9b" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:21.363215   23498 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-20220601110620-7337" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:23.377350   23498 pod_ready.go:102] pod "etcd-pause-20220601110620-7337" in "kube-system" namespace has status "Ready":"False"
	I0601 11:08:24.376360   23498 pod_ready.go:92] pod "etcd-pause-20220601110620-7337" in "kube-system" namespace has status "Ready":"True"
	I0601 11:08:24.376389   23498 pod_ready.go:81] duration metric: took 3.013167669s waiting for pod "etcd-pause-20220601110620-7337" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:24.376402   23498 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-20220601110620-7337" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:24.891182   23498 pod_ready.go:92] pod "kube-apiserver-pause-20220601110620-7337" in "kube-system" namespace has status "Ready":"True"
	I0601 11:08:24.891222   23498 pod_ready.go:81] duration metric: took 514.80909ms waiting for pod "kube-apiserver-pause-20220601110620-7337" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:24.891237   23498 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-20220601110620-7337" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:24.908486   23498 pod_ready.go:92] pod "kube-controller-manager-pause-20220601110620-7337" in "kube-system" namespace has status "Ready":"True"
	I0601 11:08:24.908513   23498 pod_ready.go:81] duration metric: took 17.268627ms waiting for pod "kube-controller-manager-pause-20220601110620-7337" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:24.908526   23498 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-khg8x" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:24.916443   23498 pod_ready.go:92] pod "kube-proxy-khg8x" in "kube-system" namespace has status "Ready":"True"
	I0601 11:08:24.916470   23498 pod_ready.go:81] duration metric: took 7.935967ms waiting for pod "kube-proxy-khg8x" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:24.916481   23498 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-20220601110620-7337" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:26.929610   23498 pod_ready.go:102] pod "kube-scheduler-pause-20220601110620-7337" in "kube-system" namespace has status "Ready":"False"
	I0601 11:08:28.933948   23498 pod_ready.go:102] pod "kube-scheduler-pause-20220601110620-7337" in "kube-system" namespace has status "Ready":"False"
	I0601 11:08:31.429856   23498 pod_ready.go:92] pod "kube-scheduler-pause-20220601110620-7337" in "kube-system" namespace has status "Ready":"True"
	I0601 11:08:31.429888   23498 pod_ready.go:81] duration metric: took 6.513399011s waiting for pod "kube-scheduler-pause-20220601110620-7337" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:31.429899   23498 pod_ready.go:38] duration metric: took 12.591286074s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:08:31.429918   23498 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0601 11:08:31.441626   23498 ops.go:34] apiserver oom_adj: -16
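
	The oom_adj check reuses the command from the log; a strongly negative value such as -16 tells the kernel OOM killer to avoid the apiserver, which is how minikube confirms the control plane came back with its protective settings intact:

	    $ sudo /bin/bash -c 'cat /proc/$(pgrep kube-apiserver)/oom_adj'
	    -16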
	I0601 11:08:31.441647   23498 kubeadm.go:630] restartCluster took 30.22610609s
	I0601 11:08:31.441655   23498 kubeadm.go:397] StartCluster complete in 30.305403206s
	I0601 11:08:31.441673   23498 settings.go:142] acquiring lock: {Name:mk7911c5de47fcf80f8c9323820467801f048d73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:08:31.441805   23498 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:08:31.443068   23498 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig: {Name:mke33453e88b70ee81536548b2f75222936235aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0601 11:08:31.444165   23498 kapi.go:59] client config for pause-20220601110620-7337: &rest.Config{Host:"https://192.168.50.64:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601110620-7337/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601110620-7337/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17122e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0601 11:08:31.447964   23498 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20220601110620-7337" rescaled to 1
	I0601 11:08:31.448020   23498 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.50.64 Port:8443 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0601 11:08:31.448043   23498 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0601 11:08:31.450403   23498 out.go:177] * Verifying Kubernetes components...
	I0601 11:08:31.448130   23498 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0601 11:08:31.448232   23498 config.go:178] Loaded profile config "pause-20220601110620-7337": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:08:31.451841   23498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:08:31.451888   23498 addons.go:65] Setting storage-provisioner=true in profile "pause-20220601110620-7337"
	I0601 11:08:31.451913   23498 addons.go:153] Setting addon storage-provisioner=true in "pause-20220601110620-7337"
	I0601 11:08:31.451916   23498 addons.go:65] Setting default-storageclass=true in profile "pause-20220601110620-7337"
	I0601 11:08:31.451936   23498 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20220601110620-7337"
	W0601 11:08:31.451921   23498 addons.go:165] addon storage-provisioner should already be in state true
	I0601 11:08:31.452086   23498 host.go:66] Checking if "pause-20220601110620-7337" exists ...
	I0601 11:08:31.452263   23498 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0601 11:08:31.452297   23498 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0601 11:08:31.452479   23498 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0601 11:08:31.452511   23498 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0601 11:08:31.467866   23498 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:39965
	I0601 11:08:31.468340   23498 main.go:134] libmachine: () Calling .GetVersion
	I0601 11:08:31.468851   23498 main.go:134] libmachine: Using API Version  1
	I0601 11:08:31.468872   23498 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 11:08:31.469204   23498 main.go:134] libmachine: () Calling .GetMachineName
	I0601 11:08:31.469410   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetState
	I0601 11:08:31.472562   23498 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:35335
	I0601 11:08:31.472580   23498 kapi.go:59] client config for pause-20220601110620-7337: &rest.Config{Host:"https://192.168.50.64:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601110620-7337/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/pause-20220601110620-7337/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17122e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0601 11:08:31.472939   23498 main.go:134] libmachine: () Calling .GetVersion
	I0601 11:08:31.473453   23498 main.go:134] libmachine: Using API Version  1
	I0601 11:08:31.473488   23498 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 11:08:31.473851   23498 main.go:134] libmachine: () Calling .GetMachineName
	I0601 11:08:31.474465   23498 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0601 11:08:31.474515   23498 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0601 11:08:31.476243   23498 addons.go:153] Setting addon default-storageclass=true in "pause-20220601110620-7337"
	W0601 11:08:31.476266   23498 addons.go:165] addon default-storageclass should already be in state true
	I0601 11:08:31.476294   23498 host.go:66] Checking if "pause-20220601110620-7337" exists ...
	I0601 11:08:31.476732   23498 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0601 11:08:31.476776   23498 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0601 11:08:31.491116   23498 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:44063
	I0601 11:08:31.491642   23498 main.go:134] libmachine: () Calling .GetVersion
	I0601 11:08:31.492213   23498 main.go:134] libmachine: Using API Version  1
	I0601 11:08:31.492244   23498 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 11:08:31.492646   23498 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:41685
	I0601 11:08:31.492807   23498 main.go:134] libmachine: () Calling .GetMachineName
	I0601 11:08:31.493003   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetState
	I0601 11:08:31.493075   23498 main.go:134] libmachine: () Calling .GetVersion
	I0601 11:08:31.493633   23498 main.go:134] libmachine: Using API Version  1
	I0601 11:08:31.493660   23498 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 11:08:31.493991   23498 main.go:134] libmachine: () Calling .GetMachineName
	I0601 11:08:31.494578   23498 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0601 11:08:31.494629   23498 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0601 11:08:31.494971   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .DriverName
	I0601 11:08:31.497459   23498 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0601 11:08:31.498902   23498 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:08:31.498921   23498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:08:31.498940   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHHostname
	I0601 11:08:31.502387   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:08:31.502838   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c6:ea", ip: ""} in network mk-pause-20220601110620-7337: {Iface:virbr5 ExpiryTime:2022-06-01 12:06:42 +0000 UTC Type:0 Mac:52:54:00:40:c6:ea Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:pause-20220601110620-7337 Clientid:01:52:54:00:40:c6:ea}
	I0601 11:08:31.502866   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined IP address 192.168.50.64 and MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:08:31.503109   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHPort
	I0601 11:08:31.503287   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHKeyPath
	I0601 11:08:31.503444   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHUsername
	I0601 11:08:31.503609   23498 sshutil.go:53] new ssh client: &{IP:192.168.50.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/pause-20220601110620-7337/id_rsa Username:docker}
	I0601 11:08:31.511645   23498 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:35887
	I0601 11:08:31.512011   23498 main.go:134] libmachine: () Calling .GetVersion
	I0601 11:08:31.512532   23498 main.go:134] libmachine: Using API Version  1
	I0601 11:08:31.512556   23498 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 11:08:31.512880   23498 main.go:134] libmachine: () Calling .GetMachineName
	I0601 11:08:31.513059   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetState
	I0601 11:08:31.514573   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .DriverName
	I0601 11:08:31.514795   23498 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:08:31.514814   23498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:08:31.514831   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHHostname
	I0601 11:08:31.517470   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:08:31.517931   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c6:ea", ip: ""} in network mk-pause-20220601110620-7337: {Iface:virbr5 ExpiryTime:2022-06-01 12:06:42 +0000 UTC Type:0 Mac:52:54:00:40:c6:ea Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:pause-20220601110620-7337 Clientid:01:52:54:00:40:c6:ea}
	I0601 11:08:31.517969   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined IP address 192.168.50.64 and MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:08:31.518084   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHPort
	I0601 11:08:31.518240   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHKeyPath
	I0601 11:08:31.518380   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHUsername
	I0601 11:08:31.518533   23498 sshutil.go:53] new ssh client: &{IP:192.168.50.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/pause-20220601110620-7337/id_rsa Username:docker}
	I0601 11:08:31.560320   23498 node_ready.go:35] waiting up to 6m0s for node "pause-20220601110620-7337" to be "Ready" ...
	I0601 11:08:31.560350   23498 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0601 11:08:31.563697   23498 node_ready.go:49] node "pause-20220601110620-7337" has status "Ready":"True"
	I0601 11:08:31.563712   23498 node_ready.go:38] duration metric: took 3.35893ms waiting for node "pause-20220601110620-7337" to be "Ready" ...
	I0601 11:08:31.563719   23498 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:08:31.570259   23498 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-cfd9b" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:31.576283   23498 pod_ready.go:92] pod "coredns-64897985d-cfd9b" in "kube-system" namespace has status "Ready":"True"
	I0601 11:08:31.576305   23498 pod_ready.go:81] duration metric: took 6.020923ms waiting for pod "coredns-64897985d-cfd9b" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:31.576317   23498 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20220601110620-7337" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:31.583863   23498 pod_ready.go:92] pod "etcd-pause-20220601110620-7337" in "kube-system" namespace has status "Ready":"True"
	I0601 11:08:31.583882   23498 pod_ready.go:81] duration metric: took 7.5573ms waiting for pod "etcd-pause-20220601110620-7337" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:31.583893   23498 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220601110620-7337" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:31.589762   23498 pod_ready.go:92] pod "kube-apiserver-pause-20220601110620-7337" in "kube-system" namespace has status "Ready":"True"
	I0601 11:08:31.589779   23498 pod_ready.go:81] duration metric: took 5.878861ms waiting for pod "kube-apiserver-pause-20220601110620-7337" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:31.589790   23498 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220601110620-7337" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:31.635132   23498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:08:31.647990   23498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:08:31.985730   23498 pod_ready.go:92] pod "kube-controller-manager-pause-20220601110620-7337" in "kube-system" namespace has status "Ready":"True"
	I0601 11:08:31.985753   23498 pod_ready.go:81] duration metric: took 395.955579ms waiting for pod "kube-controller-manager-pause-20220601110620-7337" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:31.985766   23498 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-khg8x" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:32.376123   23498 pod_ready.go:92] pod "kube-proxy-khg8x" in "kube-system" namespace has status "Ready":"True"
	I0601 11:08:32.376152   23498 pod_ready.go:81] duration metric: took 390.378008ms waiting for pod "kube-proxy-khg8x" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:32.376164   23498 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220601110620-7337" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:32.520036   23498 main.go:134] libmachine: Making call to close driver server
	I0601 11:08:32.520070   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .Close
	I0601 11:08:32.520147   23498 main.go:134] libmachine: Making call to close driver server
	I0601 11:08:32.520178   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .Close
	I0601 11:08:32.520358   23498 main.go:134] libmachine: Successfully made call to close driver server
	I0601 11:08:32.520373   23498 main.go:134] libmachine: Making call to close connection to plugin binary
	I0601 11:08:32.520384   23498 main.go:134] libmachine: Making call to close driver server
	I0601 11:08:32.520393   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .Close
	I0601 11:08:32.522053   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | Closing plugin on server side
	I0601 11:08:32.522070   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | Closing plugin on server side
	I0601 11:08:32.522076   23498 main.go:134] libmachine: Successfully made call to close driver server
	I0601 11:08:32.522090   23498 main.go:134] libmachine: Making call to close connection to plugin binary
	I0601 11:08:32.522092   23498 main.go:134] libmachine: Successfully made call to close driver server
	I0601 11:08:32.522122   23498 main.go:134] libmachine: Making call to close driver server
	I0601 11:08:32.522138   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .Close
	I0601 11:08:32.522169   23498 main.go:134] libmachine: Making call to close connection to plugin binary
	I0601 11:08:32.522195   23498 main.go:134] libmachine: Making call to close driver server
	I0601 11:08:32.522205   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .Close
	I0601 11:08:32.522431   23498 main.go:134] libmachine: Successfully made call to close driver server
	I0601 11:08:32.522453   23498 main.go:134] libmachine: Making call to close connection to plugin binary
	I0601 11:08:32.523636   23498 main.go:134] libmachine: Successfully made call to close driver server
	I0601 11:08:32.523656   23498 main.go:134] libmachine: Making call to close connection to plugin binary
	I0601 11:08:32.523641   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | Closing plugin on server side
	I0601 11:08:32.526660   23498 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0601 11:08:32.528354   23498 addons.go:417] enableAddons completed in 1.080226946s
	I0601 11:08:32.777023   23498 pod_ready.go:92] pod "kube-scheduler-pause-20220601110620-7337" in "kube-system" namespace has status "Ready":"True"
	I0601 11:08:32.777042   23498 pod_ready.go:81] duration metric: took 400.868139ms waiting for pod "kube-scheduler-pause-20220601110620-7337" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:32.777051   23498 pod_ready.go:38] duration metric: took 1.213323943s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:08:32.777070   23498 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:08:32.777106   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:08:32.793773   23498 api_server.go:71] duration metric: took 1.345711792s to wait for apiserver process to appear ...
	I0601 11:08:32.793795   23498 api_server.go:87] waiting for apiserver healthz status ...
	I0601 11:08:32.793807   23498 api_server.go:240] Checking apiserver healthz at https://192.168.50.64:8443/healthz ...
	I0601 11:08:32.798772   23498 api_server.go:266] https://192.168.50.64:8443/healthz returned 200:
	ok
	I0601 11:08:32.799783   23498 api_server.go:140] control plane version: v1.23.6
	I0601 11:08:32.799801   23498 api_server.go:130] duration metric: took 6.00013ms to wait for apiserver health ...
	I0601 11:08:32.799811   23498 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 11:08:32.983147   23498 system_pods.go:59] 7 kube-system pods found
	I0601 11:08:32.983196   23498 system_pods.go:61] "coredns-64897985d-cfd9b" [33da1e8b-2c7a-4988-9dfe-3162061c879e] Running
	I0601 11:08:32.983209   23498 system_pods.go:61] "etcd-pause-20220601110620-7337" [d8fa1287-1138-46bd-ab96-01cf4324fd0a] Running
	I0601 11:08:32.983217   23498 system_pods.go:61] "kube-apiserver-pause-20220601110620-7337" [c09b0b2b-da78-4b5f-98d4-471f3ecfc3c1] Running
	I0601 11:08:32.983224   23498 system_pods.go:61] "kube-controller-manager-pause-20220601110620-7337" [08519271-d393-4c31-b3b5-166bddc4c3ca] Running
	I0601 11:08:32.983230   23498 system_pods.go:61] "kube-proxy-khg8x" [57bb2264-4bf6-4bf6-8d33-a600f8a192a4] Running
	I0601 11:08:32.983238   23498 system_pods.go:61] "kube-scheduler-pause-20220601110620-7337" [34099a13-3eb0-49fe-a4cc-721b2e7a9159] Running
	I0601 11:08:32.983253   23498 system_pods.go:61] "storage-provisioner" [f30ddd6b-0b90-4a3a-88d9-ea548cf1fb27] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0601 11:08:32.983266   23498 system_pods.go:74] duration metric: took 183.448959ms to wait for pod list to return data ...
	I0601 11:08:32.983276   23498 default_sa.go:34] waiting for default service account to be created ...
	I0601 11:08:33.174774   23498 default_sa.go:45] found service account: "default"
	I0601 11:08:33.174796   23498 default_sa.go:55] duration metric: took 191.511248ms for default service account to be created ...
	I0601 11:08:33.174804   23498 system_pods.go:116] waiting for k8s-apps to be running ...
	I0601 11:08:33.376075   23498 system_pods.go:86] 7 kube-system pods found
	I0601 11:08:33.376104   23498 system_pods.go:89] "coredns-64897985d-cfd9b" [33da1e8b-2c7a-4988-9dfe-3162061c879e] Running
	I0601 11:08:33.376113   23498 system_pods.go:89] "etcd-pause-20220601110620-7337" [d8fa1287-1138-46bd-ab96-01cf4324fd0a] Running
	I0601 11:08:33.376120   23498 system_pods.go:89] "kube-apiserver-pause-20220601110620-7337" [c09b0b2b-da78-4b5f-98d4-471f3ecfc3c1] Running
	I0601 11:08:33.376127   23498 system_pods.go:89] "kube-controller-manager-pause-20220601110620-7337" [08519271-d393-4c31-b3b5-166bddc4c3ca] Running
	I0601 11:08:33.376134   23498 system_pods.go:89] "kube-proxy-khg8x" [57bb2264-4bf6-4bf6-8d33-a600f8a192a4] Running
	I0601 11:08:33.376146   23498 system_pods.go:89] "kube-scheduler-pause-20220601110620-7337" [34099a13-3eb0-49fe-a4cc-721b2e7a9159] Running
	I0601 11:08:33.376159   23498 system_pods.go:89] "storage-provisioner" [f30ddd6b-0b90-4a3a-88d9-ea548cf1fb27] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0601 11:08:33.376172   23498 system_pods.go:126] duration metric: took 201.361877ms to wait for k8s-apps to be running ...
	I0601 11:08:33.376183   23498 system_svc.go:44] waiting for kubelet service to be running ....
	I0601 11:08:33.376232   23498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:08:33.393190   23498 system_svc.go:56] duration metric: took 17.00178ms WaitForService to wait for kubelet.
	I0601 11:08:33.393214   23498 kubeadm.go:572] duration metric: took 1.945164231s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0601 11:08:33.393236   23498 node_conditions.go:102] verifying NodePressure condition ...
	I0601 11:08:33.576144   23498 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0601 11:08:33.576172   23498 node_conditions.go:123] node cpu capacity is 2
	I0601 11:08:33.576182   23498 node_conditions.go:105] duration metric: took 182.941908ms to run NodePressure ...
	I0601 11:08:33.576192   23498 start.go:213] waiting for startup goroutines ...
	I0601 11:08:33.617996   23498 start.go:504] kubectl: 1.24.1, cluster: 1.23.6 (minor skew: 1)
	I0601 11:08:33.620285   23498 out.go:177] * Done! kubectl is now configured to use "pause-20220601110620-7337" cluster and "default" namespace by default

** /stderr **
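The api_server.go lines in the log above record minikube's post-start health wait: it first confirms a kube-apiserver process is running on the node (via "sudo pgrep -xnf kube-apiserver.*minikube.*" over SSH), then polls the apiserver's /healthz endpoint until it answers 200. Below is a minimal Go sketch of such a poll loop, not minikube's actual implementation; the endpoint URL and the 6m0s budget are taken from the log, while the insecure TLS transport is a hypothetical stand-in for the cluster-CA handling the real code performs.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// Sketch only: the real check trusts the cluster CA instead of
		// skipping certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200:\n%s\n", url, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz did not return 200 within %s", timeout)
}

func main() {
	// Endpoint and timeout as reported in the log above.
	if err := waitForHealthz("https://192.168.50.64:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}

In the run above the first poll already returned 200 ("ok"), so the health wait completed in about 6ms (the api_server.go:130 line).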
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20220601110620-7337 -n pause-20220601110620-7337
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-20220601110620-7337 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-20220601110620-7337 logs -n 25: (1.433842377s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                  Args                  |                Profile                 |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p                                     | test-preload-20220601105919-7337       | jenkins | v1.26.0-beta.1 | 01 Jun 22 10:59 UTC | 01 Jun 22 11:01 UTC |
	|         | test-preload-20220601105919-7337       |                                        |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr        |                                        |         |                |                     |                     |
	|         | --wait=true --preload=false            |                                        |         |                |                     |                     |
	|         | --driver=kvm2                          |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd         |                                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.17.0           |                                        |         |                |                     |                     |
	| ssh     | -p                                     | test-preload-20220601105919-7337       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:01 UTC | 01 Jun 22 11:01 UTC |
	|         | test-preload-20220601105919-7337       |                                        |         |                |                     |                     |
	|         | -- sudo crictl pull                    |                                        |         |                |                     |                     |
	|         | gcr.io/k8s-minikube/busybox            |                                        |         |                |                     |                     |
	| start   | -p                                     | test-preload-20220601105919-7337       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:01 UTC | 01 Jun 22 11:02 UTC |
	|         | test-preload-20220601105919-7337       |                                        |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr        |                                        |         |                |                     |                     |
	|         | -v=1 --wait=true --driver=kvm2         |                                        |         |                |                     |                     |
	|         |  --container-runtime=containerd        |                                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.17.3           |                                        |         |                |                     |                     |
	| ssh     | -p                                     | test-preload-20220601105919-7337       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:02 UTC | 01 Jun 22 11:02 UTC |
	|         | test-preload-20220601105919-7337       |                                        |         |                |                     |                     |
	|         | -- sudo crictl image ls                |                                        |         |                |                     |                     |
	| delete  | -p                                     | test-preload-20220601105919-7337       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:02 UTC | 01 Jun 22 11:02 UTC |
	|         | test-preload-20220601105919-7337       |                                        |         |                |                     |                     |
	| start   | -p                                     | scheduled-stop-20220601110214-7337     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:02 UTC | 01 Jun 22 11:03 UTC |
	|         | scheduled-stop-20220601110214-7337     |                                        |         |                |                     |                     |
	|         | --memory=2048 --driver=kvm2            |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd         |                                        |         |                |                     |                     |
	| stop    | -p                                     | scheduled-stop-20220601110214-7337     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:03 UTC | 01 Jun 22 11:03 UTC |
	|         | scheduled-stop-20220601110214-7337     |                                        |         |                |                     |                     |
	|         | --cancel-scheduled                     |                                        |         |                |                     |                     |
	| stop    | -p                                     | scheduled-stop-20220601110214-7337     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:03 UTC | 01 Jun 22 11:03 UTC |
	|         | scheduled-stop-20220601110214-7337     |                                        |         |                |                     |                     |
	|         | --schedule 15s                         |                                        |         |                |                     |                     |
	| delete  | -p                                     | scheduled-stop-20220601110214-7337     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:04 UTC | 01 Jun 22 11:04 UTC |
	|         | scheduled-stop-20220601110214-7337     |                                        |         |                |                     |                     |
	| start   | -p                                     | kubernetes-upgrade-20220601110426-7337 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:04 UTC | 01 Jun 22 11:06 UTC |
	|         | kubernetes-upgrade-20220601110426-7337 |                                        |         |                |                     |                     |
	|         | --memory=2200                          |                                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0           |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1 --driver=kvm2   |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd         |                                        |         |                |                     |                     |
	| stop    | -p                                     | kubernetes-upgrade-20220601110426-7337 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | kubernetes-upgrade-20220601110426-7337 |                                        |         |                |                     |                     |
	| start   | -p                                     | offline-containerd-20220601110426-7337 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:04 UTC | 01 Jun 22 11:06 UTC |
	|         | offline-containerd-20220601110426-7337 |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1 --memory=2048   |                                        |         |                |                     |                     |
	|         | --wait=true --driver=kvm2              |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd         |                                        |         |                |                     |                     |
	| delete  | -p                                     | offline-containerd-20220601110426-7337 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | offline-containerd-20220601110426-7337 |                                        |         |                |                     |                     |
	| start   | -p                                     | running-upgrade-20220601110426-7337    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:07 UTC |
	|         | running-upgrade-20220601110426-7337    |                                        |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr        |                                        |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                     |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd         |                                        |         |                |                     |                     |
	| delete  | -p                                     | running-upgrade-20220601110426-7337    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:07 UTC | 01 Jun 22 11:07 UTC |
	|         | running-upgrade-20220601110426-7337    |                                        |         |                |                     |                     |
	| start   | -p pause-20220601110620-7337           | pause-20220601110620-7337              | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:07 UTC |
	|         | --memory=2048                          |                                        |         |                |                     |                     |
	|         | --install-addons=false                 |                                        |         |                |                     |                     |
	|         | --wait=all --driver=kvm2               |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd         |                                        |         |                |                     |                     |
	| start   | -p                                     | kubernetes-upgrade-20220601110426-7337 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:08 UTC |
	|         | kubernetes-upgrade-20220601110426-7337 |                                        |         |                |                     |                     |
	|         | --memory=2200                          |                                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6           |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1 --driver=kvm2   |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd         |                                        |         |                |                     |                     |
	| start   | -p                                     | NoKubernetes-20220601110707-7337       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:07 UTC | 01 Jun 22 11:08 UTC |
	|         | NoKubernetes-20220601110707-7337       |                                        |         |                |                     |                     |
	|         | --driver=kvm2                          |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd         |                                        |         |                |                     |                     |
	| start   | -p                                     | NoKubernetes-20220601110707-7337       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:08 UTC | 01 Jun 22 11:08 UTC |
	|         | NoKubernetes-20220601110707-7337       |                                        |         |                |                     |                     |
	|         | --no-kubernetes --driver=kvm2          |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd         |                                        |         |                |                     |                     |
	| delete  | -p                                     | NoKubernetes-20220601110707-7337       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:08 UTC | 01 Jun 22 11:08 UTC |
	|         | NoKubernetes-20220601110707-7337       |                                        |         |                |                     |                     |
	| start   | -p                                     | kubernetes-upgrade-20220601110426-7337 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:08 UTC | 01 Jun 22 11:08 UTC |
	|         | kubernetes-upgrade-20220601110426-7337 |                                        |         |                |                     |                     |
	|         | --memory=2200                          |                                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6           |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1 --driver=kvm2   |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd         |                                        |         |                |                     |                     |
	| delete  | -p                                     | kubernetes-upgrade-20220601110426-7337 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:08 UTC | 01 Jun 22 11:08 UTC |
	|         | kubernetes-upgrade-20220601110426-7337 |                                        |         |                |                     |                     |
	| delete  | -p kubenet-20220601110831-7337         | kubenet-20220601110831-7337            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:08 UTC | 01 Jun 22 11:08 UTC |
	| delete  | -p false-20220601110831-7337           | false-20220601110831-7337              | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:08 UTC | 01 Jun 22 11:08 UTC |
	| start   | -p pause-20220601110620-7337           | pause-20220601110620-7337              | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:07 UTC | 01 Jun 22 11:08 UTC |
	|         | --alsologtostderr                      |                                        |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                     |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd         |                                        |         |                |                     |                     |
	|---------|----------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:08:31
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 11:08:31.958466   24244 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:08:31.958668   24244 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:08:31.958681   24244 out.go:309] Setting ErrFile to fd 2...
	I0601 11:08:31.958687   24244 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:08:31.958838   24244 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:08:31.959203   24244 out.go:303] Setting JSON to false
	I0601 11:08:31.960407   24244 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3066,"bootTime":1654078646,"procs":273,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:08:31.960484   24244 start.go:125] virtualization: kvm guest
	I0601 11:08:31.963463   24244 out.go:177] * [false-20220601110831-7337] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:08:31.965309   24244 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:08:31.965282   24244 notify.go:193] Checking for updates...
	I0601 11:08:31.966865   24244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:08:31.968377   24244 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:08:31.969848   24244 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:08:31.971298   24244 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:08:31.973181   24244 config.go:178] Loaded profile config "NoKubernetes-20220601110707-7337": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I0601 11:08:31.973327   24244 config.go:178] Loaded profile config "pause-20220601110620-7337": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:08:31.973443   24244 config.go:178] Loaded profile config "stopped-upgrade-20220601110426-7337": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0601 11:08:31.973502   24244 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:08:32.015007   24244 out.go:177] * Using the kvm2 driver based on user configuration
	I0601 11:08:32.016699   24244 start.go:284] selected driver: kvm2
	I0601 11:08:32.016712   24244 start.go:806] validating driver "kvm2" against <nil>
	I0601 11:08:32.016726   24244 start.go:817] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:08:32.018660   24244 out.go:177] 
	W0601 11:08:32.019825   24244 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0601 11:08:32.021158   24244 out.go:177] 
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	4fecce19f81a8       6e38f40d628db       1 second ago         Running             storage-provisioner       0                   dc4d5114d6c4c
	d47589a815122       a4ca41631cc7a       15 seconds ago       Running             coredns                   1                   f549f250919c9
	7faa949313182       4c03754524064       15 seconds ago       Running             kube-proxy                1                   ccb92bcae55e9
	a093596174772       595f327f224a4       20 seconds ago       Running             kube-scheduler            1                   82ab164bfcc70
	43e0059e63892       df7b72818ad2e       21 seconds ago       Running             kube-controller-manager   1                   bfe9b69611c25
	311b96fbcda82       25f8c7f3da61c       22 seconds ago       Running             etcd                      1                   a77581603b9fe
	1c5a78f9d34b4       8fa62c12256df       22 seconds ago       Running             kube-apiserver            1                   9a28dc3c3ac57
	210d0cb9ee945       a4ca41631cc7a       52 seconds ago       Exited              coredns                   0                   70c4246720ef5
	6e9d7e184abed       4c03754524064       54 seconds ago       Exited              kube-proxy                0                   8cd8b5398a96f
	cd261e2d2ff3b       25f8c7f3da61c       About a minute ago   Exited              etcd                      0                   aa1161c9a565f
	5b9eea7e9f630       595f327f224a4       About a minute ago   Exited              kube-scheduler            0                   a93feceb1c90c
	ad8fa72d1866a       df7b72818ad2e       About a minute ago   Exited              kube-controller-manager   0                   de728a96e41e0
	56b11b45ac07d       8fa62c12256df       About a minute ago   Exited              kube-apiserver            0                   40aa4efa0be8d
	
	* 
	* ==> containerd <==
	* -- Journal begins at Wed 2022-06-01 11:06:38 UTC, ends at Wed 2022-06-01 11:08:34 UTC. --
	Jun 01 11:08:18 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:18.851214019Z" level=info msg="StopPodSandbox for \"70c4246720ef5a606dd9e5e6f3da85828e5fdf9d8a2f1a949c2be8c089ec1ee2\""
	Jun 01 11:08:18 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:18.851306939Z" level=info msg="Container to stop \"210d0cb9ee945ff5c4e7df4bfb8670d8d1737c543cb60322309d344491b21cb2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jun 01 11:08:18 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:18.937172722Z" level=info msg="StartContainer for \"7faa94931318263b7fb674322582984fa4ba2d560fc1092bbf6b47a1a27ca6a2\" returns successfully"
	Jun 01 11:08:18 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:18.937683701Z" level=info msg="TearDown network for sandbox \"70c4246720ef5a606dd9e5e6f3da85828e5fdf9d8a2f1a949c2be8c089ec1ee2\" successfully"
	Jun 01 11:08:18 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:18.937813409Z" level=info msg="StopPodSandbox for \"70c4246720ef5a606dd9e5e6f3da85828e5fdf9d8a2f1a949c2be8c089ec1ee2\" returns successfully"
	Jun 01 11:08:18 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:18.938346116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-64897985d-cfd9b,Uid:33da1e8b-2c7a-4988-9dfe-3162061c879e,Namespace:kube-system,Attempt:1,}"
	Jun 01 11:08:19 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:19.081212884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 01 11:08:19 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:19.081487460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 01 11:08:19 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:19.081506222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 01 11:08:19 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:19.082270446Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f549f250919c9b052dbdacda321a7afa42d4fbaec54b233b0c62f63ff09070d3 pid=4861 runtime=io.containerd.runc.v2
	Jun 01 11:08:19 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:19.532795688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-64897985d-cfd9b,Uid:33da1e8b-2c7a-4988-9dfe-3162061c879e,Namespace:kube-system,Attempt:1,} returns sandbox id \"f549f250919c9b052dbdacda321a7afa42d4fbaec54b233b0c62f63ff09070d3\""
	Jun 01 11:08:19 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:19.536695036Z" level=info msg="CreateContainer within sandbox \"f549f250919c9b052dbdacda321a7afa42d4fbaec54b233b0c62f63ff09070d3\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
	Jun 01 11:08:19 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:19.567893484Z" level=info msg="CreateContainer within sandbox \"f549f250919c9b052dbdacda321a7afa42d4fbaec54b233b0c62f63ff09070d3\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"d47589a81512246789bea9be7fefc1a62aab5d532059fb8d70b2dd3e68d41b17\""
	Jun 01 11:08:19 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:19.569306290Z" level=info msg="StartContainer for \"d47589a81512246789bea9be7fefc1a62aab5d532059fb8d70b2dd3e68d41b17\""
	Jun 01 11:08:19 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:19.680383933Z" level=info msg="StartContainer for \"d47589a81512246789bea9be7fefc1a62aab5d532059fb8d70b2dd3e68d41b17\" returns successfully"
	Jun 01 11:08:32 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:32.827482914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:f30ddd6b-0b90-4a3a-88d9-ea548cf1fb27,Namespace:kube-system,Attempt:0,}"
	Jun 01 11:08:32 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:32.853281542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 01 11:08:32 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:32.853394450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 01 11:08:32 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:32.853406138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 01 11:08:32 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:32.853880659Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc4d5114d6c4cd25baa25d9baf09c7be8d95597da1b311305c7046740723a809 pid=5029 runtime=io.containerd.runc.v2
	Jun 01 11:08:33 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:33.252464975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:f30ddd6b-0b90-4a3a-88d9-ea548cf1fb27,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc4d5114d6c4cd25baa25d9baf09c7be8d95597da1b311305c7046740723a809\""
	Jun 01 11:08:33 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:33.260126399Z" level=info msg="CreateContainer within sandbox \"dc4d5114d6c4cd25baa25d9baf09c7be8d95597da1b311305c7046740723a809\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Jun 01 11:08:33 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:33.305293268Z" level=info msg="CreateContainer within sandbox \"dc4d5114d6c4cd25baa25d9baf09c7be8d95597da1b311305c7046740723a809\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"4fecce19f81a84509183048e804a408cf72b6e089e3c52436d0d708b223d1260\""
	Jun 01 11:08:33 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:33.309501955Z" level=info msg="StartContainer for \"4fecce19f81a84509183048e804a408cf72b6e089e3c52436d0d708b223d1260\""
	Jun 01 11:08:33 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:33.418311103Z" level=info msg="StartContainer for \"4fecce19f81a84509183048e804a408cf72b6e089e3c52436d0d708b223d1260\" returns successfully"
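
	Note: the RunPodSandbox -> CreateContainer -> StartContainer sequence above is the normal CRI lifecycle for the restarted coredns and storage-provisioner pods. As a sketch (assuming the pause-20220601110620-7337 profile is still running), the same sandbox and container IDs can be cross-checked with crictl on the node:
	  $ minikube ssh -p pause-20220601110620-7337 "sudo crictl pods"
	  $ minikube ssh -p pause-20220601110620-7337 "sudo crictl ps -a"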
	
	* 
	* ==> coredns [210d0cb9ee945ff5c4e7df4bfb8670d8d1737c543cb60322309d344491b21cb2] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> coredns [d47589a81512246789bea9be7fefc1a62aab5d532059fb8d70b2dd3e68d41b17] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 7ae91e86dd75dee9ae501cb58003198b
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
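
	Note: the two coredns instances report different configuration MD5s (db32ca3650231d74073ff4cf814959a7 in the first instance vs 7ae91e86dd75dee9ae501cb58003198b in the second), so the loaded Corefile changed between the first and second start. A sketch for dumping the currently active config:
	  $ kubectl --context pause-20220601110620-7337 -n kube-system get configmap coredns -o yaml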
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20220601110620-7337
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20220601110620-7337
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=pause-20220601110620-7337
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T11_07_27_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 11:07:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20220601110620-7337
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 11:08:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:08:17 +0000   Wed, 01 Jun 2022 11:07:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:08:17 +0000   Wed, 01 Jun 2022 11:07:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:08:17 +0000   Wed, 01 Jun 2022 11:07:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Jun 2022 11:08:17 +0000   Wed, 01 Jun 2022 11:07:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.64
	  Hostname:    pause-20220601110620-7337
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2034396Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2034396Ki
	  pods:               110
	System Info:
	  Machine ID:                 5041ca1397ac4627af301b21abe77af4
	  System UUID:                5041ca13-97ac-4627-af30-1b21abe77af4
	  Boot ID:                    8025d7d0-7d07-45f9-8b61-1e2014e58b60
	  Kernel Version:             4.19.235
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-cfd9b                              100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     55s
	  kube-system                 etcd-pause-20220601110620-7337                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         62s
	  kube-system                 kube-apiserver-pause-20220601110620-7337             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-controller-manager-pause-20220601110620-7337    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-khg8x                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-pause-20220601110620-7337             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 15s                kube-proxy  
	  Normal  Starting                 53s                kube-proxy  
	  Normal  NodeHasSufficientMemory  77s (x4 over 77s)  kubelet     Node pause-20220601110620-7337 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s (x4 over 77s)  kubelet     Node pause-20220601110620-7337 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s (x4 over 77s)  kubelet     Node pause-20220601110620-7337 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 77s                kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    62s                kubelet     Node pause-20220601110620-7337 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s                kubelet     Node pause-20220601110620-7337 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  62s                kubelet     Node pause-20220601110620-7337 status is now: NodeHasSufficientMemory
	  Normal  Starting                 62s                kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  62s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                56s                kubelet     Node pause-20220601110620-7337 status is now: NodeReady
	  Normal  Starting                 24s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 24s)  kubelet     Node pause-20220601110620-7337 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 24s)  kubelet     Node pause-20220601110620-7337 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 24s)  kubelet     Node pause-20220601110620-7337 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.040011] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.028005] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.569727] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.360382] systemd-fstab-generator[1165]: Ignoring "noauto" for root device
	[  +0.181064] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.802847] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1737 comm=systemd-network
	[  +3.116514] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[ +18.267726] systemd-fstab-generator[2153]: Ignoring "noauto" for root device
	[Jun 1 11:07] systemd-fstab-generator[2186]: Ignoring "noauto" for root device
	[  +0.161618] systemd-fstab-generator[2197]: Ignoring "noauto" for root device
	[  +0.368332] systemd-fstab-generator[2231]: Ignoring "noauto" for root device
	[  +6.441451] systemd-fstab-generator[2429]: Ignoring "noauto" for root device
	[ +15.264602] systemd-fstab-generator[2814]: Ignoring "noauto" for root device
	[ +13.643916] kauditd_printk_skb: 38 callbacks suppressed
	[  +7.474164] kauditd_printk_skb: 86 callbacks suppressed
	[  +5.478041] kauditd_printk_skb: 20 callbacks suppressed
	[  +2.719313] systemd-fstab-generator[3722]: Ignoring "noauto" for root device
	[  +0.149449] systemd-fstab-generator[3733]: Ignoring "noauto" for root device
	[  +0.315115] systemd-fstab-generator[3761]: Ignoring "noauto" for root device
	[  +3.842952] kauditd_printk_skb: 8 callbacks suppressed
	[Jun 1 11:08] systemd-fstab-generator[4279]: Ignoring "noauto" for root device
	[ +13.219223] kauditd_printk_skb: 53 callbacks suppressed
	[ +11.617220] kauditd_printk_skb: 23 callbacks suppressed
	
	* 
	* ==> etcd [311b96fbcda822ef4b02d7d96f277235c4c1f3b7646779f4da95885b73b546ac] <==
	* {"level":"info","ts":"2022-06-01T11:08:13.941Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"7e00f7fcc1a7adc9","local-server-version":"3.5.1","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-06-01T11:08:13.962Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-01T11:08:13.966Z","caller":"etcdserver/server.go:744","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-06-01T11:08:13.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e00f7fcc1a7adc9 switched to configuration voters=(9079529513731730889)"}
	{"level":"info","ts":"2022-06-01T11:08:13.966Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c6005d374c1772c0","local-member-id":"7e00f7fcc1a7adc9","added-peer-id":"7e00f7fcc1a7adc9","added-peer-peer-urls":["https://192.168.50.64:2380"]}
	{"level":"info","ts":"2022-06-01T11:08:13.966Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c6005d374c1772c0","local-member-id":"7e00f7fcc1a7adc9","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:08:13.966Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:08:13.973Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"7e00f7fcc1a7adc9","initial-advertise-peer-urls":["https://192.168.50.64:2380"],"listen-peer-urls":["https://192.168.50.64:2380"],"advertise-client-urls":["https://192.168.50.64:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.64:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-01T11:08:13.973Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-01T11:08:13.974Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.50.64:2380"}
	{"level":"info","ts":"2022-06-01T11:08:13.974Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.50.64:2380"}
	{"level":"info","ts":"2022-06-01T11:08:14.805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e00f7fcc1a7adc9 is starting a new election at term 2"}
	{"level":"info","ts":"2022-06-01T11:08:14.805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e00f7fcc1a7adc9 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-06-01T11:08:14.805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e00f7fcc1a7adc9 received MsgPreVoteResp from 7e00f7fcc1a7adc9 at term 2"}
	{"level":"info","ts":"2022-06-01T11:08:14.805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e00f7fcc1a7adc9 became candidate at term 3"}
	{"level":"info","ts":"2022-06-01T11:08:14.805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e00f7fcc1a7adc9 received MsgVoteResp from 7e00f7fcc1a7adc9 at term 3"}
	{"level":"info","ts":"2022-06-01T11:08:14.805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e00f7fcc1a7adc9 became leader at term 3"}
	{"level":"info","ts":"2022-06-01T11:08:14.805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7e00f7fcc1a7adc9 elected leader 7e00f7fcc1a7adc9 at term 3"}
	{"level":"info","ts":"2022-06-01T11:08:14.808Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"7e00f7fcc1a7adc9","local-member-attributes":"{Name:pause-20220601110620-7337 ClientURLs:[https://192.168.50.64:2379]}","request-path":"/0/members/7e00f7fcc1a7adc9/attributes","cluster-id":"c6005d374c1772c0","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T11:08:14.809Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:08:14.809Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:08:14.811Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T11:08:14.818Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.50.64:2379"}
	{"level":"info","ts":"2022-06-01T11:08:14.820Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:08:14.820Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [cd261e2d2ff3b0de0b3fe0411dec2110ca530014dcdc702e0acd927e9d6fd7f8] <==
	* {"level":"warn","ts":"2022-06-01T11:07:43.799Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"768.029089ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-64897985d-b6g8m\" ","response":"range_response_count:1 size:4626"}
	{"level":"info","ts":"2022-06-01T11:07:43.799Z","caller":"traceutil/trace.go:171","msg":"trace[1963439830] range","detail":"{range_begin:/registry/pods/kube-system/coredns-64897985d-b6g8m; range_end:; response_count:1; response_revision:470; }","duration":"768.084544ms","start":"2022-06-01T11:07:43.031Z","end":"2022-06-01T11:07:43.799Z","steps":["trace[1963439830] 'agreement among raft nodes before linearized reading'  (duration: 767.996415ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T11:07:43.799Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T11:07:43.031Z","time spent":"768.126345ms","remote":"127.0.0.1:48750","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":4649,"request content":"key:\"/registry/pods/kube-system/coredns-64897985d-b6g8m\" "}
	{"level":"info","ts":"2022-06-01T11:07:43.898Z","caller":"traceutil/trace.go:171","msg":"trace[205847652] linearizableReadLoop","detail":"{readStateIndex:483; appliedIndex:483; }","duration":"101.356832ms","start":"2022-06-01T11:07:43.797Z","end":"2022-06-01T11:07:43.898Z","steps":["trace[205847652] 'read index received'  (duration: 101.350079ms)","trace[205847652] 'applied index is now lower than readState.Index'  (duration: 5.845µs)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T11:07:44.441Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.040582734s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-01T11:07:44.441Z","caller":"traceutil/trace.go:171","msg":"trace[1770652811] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:470; }","duration":"1.040654584s","start":"2022-06-01T11:07:43.401Z","end":"2022-06-01T11:07:44.441Z","steps":["trace[1770652811] 'agreement among raft nodes before linearized reading'  (duration: 497.956946ms)","trace[1770652811] 'range keys from in-memory index tree'  (duration: 542.609703ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T11:07:44.441Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T11:07:43.401Z","time spent":"1.040711815s","remote":"127.0.0.1:48848","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2022-06-01T11:07:44.442Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.183534976s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-01T11:07:44.442Z","caller":"traceutil/trace.go:171","msg":"trace[31237833] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:470; }","duration":"1.183603526s","start":"2022-06-01T11:07:43.258Z","end":"2022-06-01T11:07:44.442Z","steps":["trace[31237833] 'agreement among raft nodes before linearized reading'  (duration: 640.649127ms)","trace[31237833] 'range keys from in-memory index tree'  (duration: 542.873748ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T11:07:44.442Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"543.112777ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12522682208803674963 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-64897985d-b6g8m.16f47a8272a1f6cf\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-64897985d-b6g8m.16f47a8272a1f6cf\" value_size:625 lease:3299310171948898683 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2022-06-01T11:07:44.442Z","caller":"traceutil/trace.go:171","msg":"trace[57789242] linearizableReadLoop","detail":"{readStateIndex:484; appliedIndex:483; }","duration":"543.237143ms","start":"2022-06-01T11:07:43.899Z","end":"2022-06-01T11:07:44.442Z","steps":["trace[57789242] 'read index received'  (duration: 325.465367ms)","trace[57789242] 'applied index is now lower than readState.Index'  (duration: 217.77096ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-01T11:07:44.442Z","caller":"traceutil/trace.go:171","msg":"trace[1011451717] transaction","detail":"{read_only:false; response_revision:471; number_of_response:1; }","duration":"599.254141ms","start":"2022-06-01T11:07:43.843Z","end":"2022-06-01T11:07:44.442Z","steps":["trace[1011451717] 'process raft request'  (duration: 56.07336ms)","trace[1011451717] 'compare'  (duration: 542.234257ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T11:07:44.442Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"637.316849ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-20220601110620-7337\" ","response":"range_response_count:1 size:4578"}
	{"level":"warn","ts":"2022-06-01T11:07:44.442Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T11:07:43.843Z","time spent":"599.310289ms","remote":"127.0.0.1:48728","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":712,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-64897985d-b6g8m.16f47a8272a1f6cf\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-64897985d-b6g8m.16f47a8272a1f6cf\" value_size:625 lease:3299310171948898683 >> failure:<>"}
	{"level":"info","ts":"2022-06-01T11:07:44.442Z","caller":"traceutil/trace.go:171","msg":"trace[169433543] range","detail":"{range_begin:/registry/minions/pause-20220601110620-7337; range_end:; response_count:1; response_revision:471; }","duration":"637.330503ms","start":"2022-06-01T11:07:43.805Z","end":"2022-06-01T11:07:44.442Z","steps":["trace[169433543] 'agreement among raft nodes before linearized reading'  (duration: 637.269331ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T11:07:44.442Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T11:07:43.805Z","time spent":"637.462524ms","remote":"127.0.0.1:48748","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":4601,"request content":"key:\"/registry/minions/pause-20220601110620-7337\" "}
	{"level":"warn","ts":"2022-06-01T11:07:45.784Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"271.937353ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12522682208803674976 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.64\" mod_revision:303 > success:<request_put:<key:\"/registry/masterleases/192.168.50.64\" value_size:68 lease:3299310171948899166 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.64\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-06-01T11:07:45.785Z","caller":"traceutil/trace.go:171","msg":"trace[561756336] linearizableReadLoop","detail":"{readStateIndex:486; appliedIndex:485; }","duration":"383.551551ms","start":"2022-06-01T11:07:45.401Z","end":"2022-06-01T11:07:45.785Z","steps":["trace[561756336] 'read index received'  (duration: 111.06061ms)","trace[561756336] 'applied index is now lower than readState.Index'  (duration: 272.488067ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-01T11:07:45.785Z","caller":"traceutil/trace.go:171","msg":"trace[1748133338] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"394.410686ms","start":"2022-06-01T11:07:45.390Z","end":"2022-06-01T11:07:45.785Z","steps":["trace[1748133338] 'process raft request'  (duration: 121.756441ms)","trace[1748133338] 'compare'  (duration: 271.824414ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T11:07:45.785Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T11:07:45.390Z","time spent":"394.906543ms","remote":"127.0.0.1:48724","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.50.64\" mod_revision:303 > success:<request_put:<key:\"/registry/masterleases/192.168.50.64\" value_size:68 lease:3299310171948899166 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.64\" > >"}
	{"level":"warn","ts":"2022-06-01T11:07:45.785Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"383.875733ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-01T11:07:45.786Z","caller":"traceutil/trace.go:171","msg":"trace[433450755] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:472; }","duration":"384.844591ms","start":"2022-06-01T11:07:45.401Z","end":"2022-06-01T11:07:45.786Z","steps":["trace[433450755] 'agreement among raft nodes before linearized reading'  (duration: 383.825733ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T11:07:45.786Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T11:07:45.401Z","time spent":"385.028773ms","remote":"127.0.0.1:48848","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2022-06-01T11:07:45.786Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"254.914514ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-64897985d-b6g8m\" ","response":"range_response_count:1 size:4626"}
	{"level":"info","ts":"2022-06-01T11:07:45.787Z","caller":"traceutil/trace.go:171","msg":"trace[1462854947] range","detail":"{range_begin:/registry/pods/kube-system/coredns-64897985d-b6g8m; range_end:; response_count:1; response_revision:472; }","duration":"255.857807ms","start":"2022-06-01T11:07:45.531Z","end":"2022-06-01T11:07:45.787Z","steps":["trace[1462854947] 'agreement among raft nodes before linearized reading'  (duration: 254.620124ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  11:08:34 up 2 min,  0 users,  load average: 1.58, 0.65, 0.24
	Linux pause-20220601110620-7337 4.19.235 #1 SMP Fri May 27 20:55:39 UTC 2022 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [1c5a78f9d34b4a64a73c941808eb4140890ab7b2e83154479eda32adbecd78e2] <==
	* I0601 11:08:17.193319       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0601 11:08:17.199948       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0601 11:08:17.200124       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	I0601 11:08:17.200353       1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0601 11:08:17.220284       1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0601 11:08:17.240622       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	E0601 11:08:17.241917       1 controller.go:157] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0601 11:08:17.245322       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0601 11:08:17.246327       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0601 11:08:17.254083       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0601 11:08:17.265656       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0601 11:08:17.265679       1 cache.go:39] Caches are synced for autoregister controller
	I0601 11:08:17.300671       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0601 11:08:17.322886       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0601 11:08:18.122709       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0601 11:08:18.137365       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0601 11:08:18.163164       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0601 11:08:18.702638       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 11:08:18.715526       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 11:08:18.779780       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 11:08:18.812809       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 11:08:18.819759       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 11:08:19.043263       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 11:08:30.219530       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 11:08:30.451942       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [56b11b45ac07d6beafde3aaf8283c976fc2b48fe111f9be2f406a2c0d0a3009b] <==
	* I0601 11:07:32.275471       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 11:07:39.688249       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 11:07:39.719349       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 11:07:40.954295       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 11:07:43.800418       1 trace.go:205] Trace[1495120344]: "GuaranteedUpdate etcd3" type:*discovery.EndpointSlice (01-Jun-2022 11:07:42.782) (total time: 1018ms):
	Trace[1495120344]: ---"Transaction committed" 1017ms (11:07:43.800)
	Trace[1495120344]: [1.018261732s] [1.018261732s] END
	I0601 11:07:43.801519       1 trace.go:205] Trace[1985781852]: "Update" url:/apis/discovery.k8s.io/v1/namespaces/kube-system/endpointslices/kube-dns-r5ck2,user-agent:kube-controller-manager/v1.23.6 (linux/amd64) kubernetes/ad33385/system:serviceaccount:kube-system:endpointslice-controller,audit-id:8af30ceb-f3cd-4372-9802-2cac9ac89934,client:192.168.50.64,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (01-Jun-2022 11:07:42.781) (total time: 1019ms):
	Trace[1985781852]: ---"Object stored in database" 1019ms (11:07:43.801)
	Trace[1985781852]: [1.019584782s] [1.019584782s] END
	I0601 11:07:43.802884       1 trace.go:205] Trace[1809827139]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-64897985d-b6g8m,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:e2d9d8f3-dd34-4dd7-8f75-9357e0b36189,client:192.168.50.1,accept:application/json, */*,protocol:HTTP/2.0 (01-Jun-2022 11:07:43.030) (total time: 772ms):
	Trace[1809827139]: ---"About to write a response" 771ms (11:07:43.802)
	Trace[1809827139]: [772.377579ms] [772.377579ms] END
	I0601 11:07:43.803647       1 trace.go:205] Trace[636711164]: "GuaranteedUpdate etcd3" type:*core.Endpoints (01-Jun-2022 11:07:42.788) (total time: 1015ms):
	Trace[636711164]: ---"Transaction committed" 1014ms (11:07:43.803)
	Trace[636711164]: [1.015287574s] [1.015287574s] END
	I0601 11:07:43.804152       1 trace.go:205] Trace[1043678735]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-dns,user-agent:kube-controller-manager/v1.23.6 (linux/amd64) kubernetes/ad33385/system:serviceaccount:kube-system:endpoint-controller,audit-id:06cae169-1606-4a25-a337-aed7947e3553,client:192.168.50.64,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (01-Jun-2022 11:07:42.788) (total time: 1015ms):
	Trace[1043678735]: ---"Object stored in database" 1015ms (11:07:43.804)
	Trace[1043678735]: [1.015946768s] [1.015946768s] END
	I0601 11:07:44.445615       1 trace.go:205] Trace[827037909]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.23.6 (linux/amd64) kubernetes/ad33385,audit-id:5a6baec0-28d4-4066-b5ee-09be29ce7efe,client:192.168.50.64,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (01-Jun-2022 11:07:43.841) (total time: 604ms):
	Trace[827037909]: ---"Object stored in database" 603ms (11:07:44.445)
	Trace[827037909]: [604.172133ms] [604.172133ms] END
	I0601 11:07:44.447566       1 trace.go:205] Trace[785115675]: "Get" url:/api/v1/nodes/pause-20220601110620-7337,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:7301f112-b626-416d-b8f2-98a97ad801af,client:192.168.50.1,accept:application/json, */*,protocol:HTTP/2.0 (01-Jun-2022 11:07:43.804) (total time: 642ms):
	Trace[785115675]: ---"About to write a response" 639ms (11:07:44.444)
	Trace[785115675]: [642.659186ms] [642.659186ms] END
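
	Note: the ~1.0s "Transaction committed" traces here fall in the same 11:07:42-11:07:44 window as the etcd apply warnings above, suggesting the latency originates below the apiserver rather than in request handling. A sketch for pulling further trace lines from the running apiserver pod (name taken from the node listing above):
	  $ kubectl --context pause-20220601110620-7337 -n kube-system logs kube-apiserver-pause-20220601110620-7337 | grep -A2 'Trace\['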
	
	* 
	* ==> kube-controller-manager [43e0059e6389236d53b0c17d920ac076fb5f940727f5965cbcce60e344180360] <==
	* I0601 11:08:30.175194       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0601 11:08:30.181441       1 shared_informer.go:247] Caches are synced for endpoint 
	I0601 11:08:30.185152       1 shared_informer.go:247] Caches are synced for TTL 
	I0601 11:08:30.188375       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0601 11:08:30.190758       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0601 11:08:30.192346       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0601 11:08:30.194636       1 shared_informer.go:247] Caches are synced for job 
	I0601 11:08:30.196270       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0601 11:08:30.197168       1 shared_informer.go:247] Caches are synced for deployment 
	I0601 11:08:30.218142       1 shared_informer.go:247] Caches are synced for HPA 
	I0601 11:08:30.222097       1 shared_informer.go:247] Caches are synced for disruption 
	I0601 11:08:30.222161       1 disruption.go:371] Sending events to api server.
	I0601 11:08:30.222432       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0601 11:08:30.226081       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0601 11:08:30.230236       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0601 11:08:30.235467       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0601 11:08:30.307109       1 shared_informer.go:247] Caches are synced for namespace 
	I0601 11:08:30.370555       1 shared_informer.go:247] Caches are synced for service account 
	I0601 11:08:30.384290       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0601 11:08:30.390137       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:08:30.391384       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:08:30.438107       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0601 11:08:30.889644       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:08:30.891037       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:08:30.891245       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-controller-manager [ad8fa72d1866ad4c1dc86626944739f7227b699d591b5ed6f510390f961b1dd0] <==
	* I0601 11:07:38.837543       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0601 11:07:38.841791       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0601 11:07:38.860559       1 shared_informer.go:247] Caches are synced for HPA 
	I0601 11:07:38.860635       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0601 11:07:38.860767       1 shared_informer.go:247] Caches are synced for service account 
	I0601 11:07:38.862438       1 shared_informer.go:247] Caches are synced for deployment 
	I0601 11:07:38.863337       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0601 11:07:38.982571       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:07:39.009109       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0601 11:07:39.022314       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:07:39.066468       1 shared_informer.go:247] Caches are synced for taint 
	I0601 11:07:39.066686       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	W0601 11:07:39.066810       1 node_lifecycle_controller.go:1012] Missing timestamp for Node pause-20220601110620-7337. Assuming now as a timestamp.
	I0601 11:07:39.066850       1 node_lifecycle_controller.go:1213] Controller detected that zone  is now in state Normal.
	I0601 11:07:39.067376       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0601 11:07:39.067916       1 event.go:294] "Event occurred" object="pause-20220601110620-7337" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20220601110620-7337 event: Registered Node pause-20220601110620-7337 in Controller"
	I0601 11:07:39.490516       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:07:39.508249       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:07:39.508305       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0601 11:07:39.696816       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0601 11:07:39.736494       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-khg8x"
	I0601 11:07:39.778525       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0601 11:07:39.818490       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-b6g8m"
	I0601 11:07:39.827919       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-cfd9b"
	I0601 11:07:39.873625       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-b6g8m"
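
	Note: the scale-up of coredns-64897985d to 2 followed by the scale-down to 1 and deletion of coredns-64897985d-b6g8m is expected here: kubeadm deploys two CoreDNS replicas and minikube trims the Deployment to one. The deleted pod is the same one seen in the slow etcd range reads above. A sketch to confirm the surviving replica:
	  $ kubectl --context pause-20220601110620-7337 -n kube-system get deployment coredns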
	
	* 
	* ==> kube-proxy [6e9d7e184abed4789b1f1d5e9279f2e6e10c04b7c1f2c361b24609a47937900c] <==
	* I0601 11:07:40.755419       1 node.go:163] Successfully retrieved node IP: 192.168.50.64
	I0601 11:07:40.755468       1 server_others.go:138] "Detected node IP" address="192.168.50.64"
	I0601 11:07:40.755538       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:07:40.905152       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0601 11:07:40.905196       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:07:40.941277       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:07:40.946104       1 config.go:317] "Starting service config controller"
	I0601 11:07:40.946121       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:07:40.946147       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:07:40.946152       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:07:41.069829       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 11:07:41.095590       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [7faa94931318263b7fb674322582984fa4ba2d560fc1092bbf6b47a1a27ca6a2] <==
	* I0601 11:08:18.993468       1 node.go:163] Successfully retrieved node IP: 192.168.50.64
	I0601 11:08:18.993536       1 server_others.go:138] "Detected node IP" address="192.168.50.64"
	I0601 11:08:18.993564       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:08:19.038107       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0601 11:08:19.038152       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:08:19.038453       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:08:19.039257       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:08:19.039308       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:08:19.039365       1 config.go:317] "Starting service config controller"
	I0601 11:08:19.039397       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:08:19.140186       1 shared_informer.go:247] Caches are synced for service config 
	I0601 11:08:19.140276       1 shared_informer.go:247] Caches are synced for endpoint slice config 
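
	Note: both kube-proxy instances log "Unknown proxy mode, assuming iptables proxy" because the mode field in the kube-proxy configuration is empty, and iptables is the fallback on Linux. A sketch to inspect the rendered config, assuming the standard kubeadm kube-proxy ConfigMap:
	  $ kubectl --context pause-20220601110620-7337 -n kube-system get configmap kube-proxy -o yaml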
	
	* 
	* ==> kube-scheduler [5b9eea7e9f630b4f732f8810f7ecbfacf550b07152d3c2ec94cb2a7d2f311190] <==
	* W0601 11:07:23.728209       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 11:07:23.728772       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0601 11:07:23.729127       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 11:07:23.729589       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 11:07:24.580109       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:07:24.580361       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 11:07:24.671793       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0601 11:07:24.671863       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0601 11:07:24.757030       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 11:07:24.757080       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0601 11:07:24.764925       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 11:07:24.765276       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0601 11:07:24.798305       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 11:07:24.798365       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0601 11:07:24.828452       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 11:07:24.828502       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 11:07:24.850143       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:07:24.850191       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 11:07:24.889871       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 11:07:24.889923       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 11:07:24.917240       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 11:07:24.917289       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0601 11:07:24.948844       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 11:07:24.948898       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0601 11:07:26.918880       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
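
	Note: the "is forbidden: User \"system:kube-scheduler\" cannot list" warnings are most likely the usual startup race: the scheduler's informers retry until the apiserver has served the bootstrap RBAC policy, and this log ends with the scheduler's caches synced at 11:07:26. A sketch for viewing the binding involved:
	  $ kubectl --context pause-20220601110620-7337 get clusterrolebinding system:kube-scheduler -o yaml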
	
	* 
	* ==> kube-scheduler [a093596174772dc021711949d956abddf61b8ab0e6aa809b74482f373e0b6f69] <==
	* I0601 11:08:15.116305       1 serving.go:348] Generated self-signed cert in-memory
	W0601 11:08:17.177658       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0601 11:08:17.177682       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0601 11:08:17.177769       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0601 11:08:17.177774       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0601 11:08:17.267601       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
	I0601 11:08:17.270560       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0601 11:08:17.270741       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0601 11:08:17.270765       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0601 11:08:17.270814       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0601 11:08:17.373893       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2022-06-01 11:06:38 UTC, ends at Wed 2022-06-01 11:08:35 UTC. --
	Jun 01 11:08:16 pause-20220601110620-7337 kubelet[4285]: E0601 11:08:16.419084    4285 kubelet.go:2461] "Error getting node" err="node \"pause-20220601110620-7337\" not found"
	Jun 01 11:08:16 pause-20220601110620-7337 kubelet[4285]: E0601 11:08:16.520079    4285 kubelet.go:2461] "Error getting node" err="node \"pause-20220601110620-7337\" not found"
	Jun 01 11:08:16 pause-20220601110620-7337 kubelet[4285]: E0601 11:08:16.621024    4285 kubelet.go:2461] "Error getting node" err="node \"pause-20220601110620-7337\" not found"
	Jun 01 11:08:16 pause-20220601110620-7337 kubelet[4285]: E0601 11:08:16.722066    4285 kubelet.go:2461] "Error getting node" err="node \"pause-20220601110620-7337\" not found"
	Jun 01 11:08:16 pause-20220601110620-7337 kubelet[4285]: E0601 11:08:16.823210    4285 kubelet.go:2461] "Error getting node" err="node \"pause-20220601110620-7337\" not found"
	Jun 01 11:08:16 pause-20220601110620-7337 kubelet[4285]: E0601 11:08:16.923587    4285 kubelet.go:2461] "Error getting node" err="node \"pause-20220601110620-7337\" not found"
	Jun 01 11:08:17 pause-20220601110620-7337 kubelet[4285]: E0601 11:08:17.024564    4285 kubelet.go:2461] "Error getting node" err="node \"pause-20220601110620-7337\" not found"
	Jun 01 11:08:17 pause-20220601110620-7337 kubelet[4285]: E0601 11:08:17.125785    4285 kubelet.go:2461] "Error getting node" err="node \"pause-20220601110620-7337\" not found"
	Jun 01 11:08:17 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:17.227138    4285 kuberuntime_manager.go:1105] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 01 11:08:17 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:17.228534    4285 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 01 11:08:17 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:17.326219    4285 kubelet_node_status.go:108] "Node was previously registered" node="pause-20220601110620-7337"
	Jun 01 11:08:17 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:17.326421    4285 kubelet_node_status.go:73] "Successfully registered node" node="pause-20220601110620-7337"
	Jun 01 11:08:17 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:17.939835    4285 apiserver.go:52] "Watching apiserver"
	Jun 01 11:08:17 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:17.943164    4285 topology_manager.go:200] "Topology Admit Handler"
	Jun 01 11:08:17 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:17.943289    4285 topology_manager.go:200] "Topology Admit Handler"
	Jun 01 11:08:18 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:18.034657    4285 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57bb2264-4bf6-4bf6-8d33-a600f8a192a4-xtables-lock\") pod \"kube-proxy-khg8x\" (UID: \"57bb2264-4bf6-4bf6-8d33-a600f8a192a4\") " pod="kube-system/kube-proxy-khg8x"
	Jun 01 11:08:18 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:18.034735    4285 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57bb2264-4bf6-4bf6-8d33-a600f8a192a4-lib-modules\") pod \"kube-proxy-khg8x\" (UID: \"57bb2264-4bf6-4bf6-8d33-a600f8a192a4\") " pod="kube-system/kube-proxy-khg8x"
	Jun 01 11:08:18 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:18.034762    4285 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8dm2\" (UniqueName: \"kubernetes.io/projected/57bb2264-4bf6-4bf6-8d33-a600f8a192a4-kube-api-access-k8dm2\") pod \"kube-proxy-khg8x\" (UID: \"57bb2264-4bf6-4bf6-8d33-a600f8a192a4\") " pod="kube-system/kube-proxy-khg8x"
	Jun 01 11:08:18 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:18.034786    4285 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33da1e8b-2c7a-4988-9dfe-3162061c879e-config-volume\") pod \"coredns-64897985d-cfd9b\" (UID: \"33da1e8b-2c7a-4988-9dfe-3162061c879e\") " pod="kube-system/coredns-64897985d-cfd9b"
	Jun 01 11:08:18 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:18.034804    4285 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/57bb2264-4bf6-4bf6-8d33-a600f8a192a4-kube-proxy\") pod \"kube-proxy-khg8x\" (UID: \"57bb2264-4bf6-4bf6-8d33-a600f8a192a4\") " pod="kube-system/kube-proxy-khg8x"
	Jun 01 11:08:18 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:18.034824    4285 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z58fl\" (UniqueName: \"kubernetes.io/projected/33da1e8b-2c7a-4988-9dfe-3162061c879e-kube-api-access-z58fl\") pod \"coredns-64897985d-cfd9b\" (UID: \"33da1e8b-2c7a-4988-9dfe-3162061c879e\") " pod="kube-system/coredns-64897985d-cfd9b"
	Jun 01 11:08:18 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:18.034832    4285 reconciler.go:157] "Reconciler: start to sync state"
	Jun 01 11:08:32 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:32.522902    4285 topology_manager.go:200] "Topology Admit Handler"
	Jun 01 11:08:32 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:32.658747    4285 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f30ddd6b-0b90-4a3a-88d9-ea548cf1fb27-tmp\") pod \"storage-provisioner\" (UID: \"f30ddd6b-0b90-4a3a-88d9-ea548cf1fb27\") " pod="kube-system/storage-provisioner"
	Jun 01 11:08:32 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:32.658930    4285 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4j8k\" (UniqueName: \"kubernetes.io/projected/f30ddd6b-0b90-4a3a-88d9-ea548cf1fb27-kube-api-access-j4j8k\") pod \"storage-provisioner\" (UID: \"f30ddd6b-0b90-4a3a-88d9-ea548cf1fb27\") " pod="kube-system/storage-provisioner"
	
	* 
	* ==> storage-provisioner [4fecce19f81a84509183048e804a408cf72b6e089e3c52436d0d708b223d1260] <==
	* I0601 11:08:33.463848       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0601 11:08:33.498348       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0601 11:08:33.499236       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0601 11:08:33.513431       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0601 11:08:33.513756       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220601110620-7337_3e979037-e188-4041-ba26-ffaad90b4b1d!
	I0601 11:08:33.516413       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad23f3b0-dcbd-4dd9-83a3-2484c24c9c05", APIVersion:"v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220601110620-7337_3e979037-e188-4041-ba26-ffaad90b4b1d became leader
	I0601 11:08:33.614408       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220601110620-7337_3e979037-e188-4041-ba26-ffaad90b4b1d!
	

-- /stdout --
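The storage-provisioner log above shows the standard client-go leader-election handshake: the pod acquires the kube-system/k8s.io-minikube-hostpath lease, emits a LeaderElection event, and only then starts its provisioner controller. Below is a minimal sketch of that pattern, assuming client-go's leaderelection package and an in-cluster config; the lock identity and the use of a Leases lock are illustrative assumptions (the logged provisioner holds an Endpoints lock), not minikube's actual code.

	// Sketch: acquire a lease named after the one in the log, then start work.
	// Assumes in-cluster credentials; the identity string is hypothetical.
	package main

	import (
		"context"
		"log"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		lock, err := resourcelock.New(
			resourcelock.LeasesResourceLock,
			"kube-system", "k8s.io-minikube-hostpath",
			client.CoreV1(), client.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: "example-identity"},
		)
		if err != nil {
			log.Fatal(err)
		}

		// Blocks until leadership is lost; OnStartedLeading is where the
		// provisioner controller from the log would be started.
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("became leader; starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Println("lost leadership; stopping")
				},
			},
		})
	}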
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20220601110620-7337 -n pause-20220601110620-7337
helpers_test.go:261: (dbg) Run:  kubectl --context pause-20220601110620-7337 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context pause-20220601110620-7337 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-20220601110620-7337 describe pod : exit status 1 (43.11564ms)

** stderr ** 
	error: resource name may not be empty

** /stderr **
helpers_test.go:277: kubectl --context pause-20220601110620-7337 describe pod : exit status 1
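The exit status 1 above is expected here: the field selector matched no pods, so "kubectl describe pod" ran with zero resource names, and kubectl rejects an empty name. A hedged sketch of how a post-mortem step could guard the empty case follows; describeNonRunning is a hypothetical helper written for illustration, not code from helpers_test.go.

	// Sketch: list non-running pods first and skip describe when there are none.
	// Requires kubectl on PATH; the context name is taken from the report above.
	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	func describeNonRunning(kubectlContext string) error {
		// Same query the harness runs before the describe step.
		out, err := exec.Command("kubectl", "--context", kubectlContext,
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running").Output()
		if err != nil {
			return err
		}
		names := strings.Fields(string(out))
		if len(names) == 0 {
			log.Println("no non-running pods; skipping describe")
			return nil // avoids kubectl's "resource name may not be empty"
		}
		args := append([]string{"--context", kubectlContext, "describe", "pod"}, names...)
		return exec.Command("kubectl", args...).Run()
	}

	func main() {
		if err := describeNonRunning("pause-20220601110620-7337"); err != nil {
			log.Fatal(err)
		}
	}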
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20220601110620-7337 -n pause-20220601110620-7337
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-20220601110620-7337 logs -n 25

=== CONT  TestPause/serial/SecondStartNoReconfiguration
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-20220601110620-7337 logs -n 25: (1.435596742s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                  Args                  |                Profile                 |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p                                     | test-preload-20220601105919-7337       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:01 UTC | 01 Jun 22 11:01 UTC |
	|         | test-preload-20220601105919-7337       |                                        |         |                |                     |                     |
	|         | -- sudo crictl pull                    |                                        |         |                |                     |                     |
	|         | gcr.io/k8s-minikube/busybox            |                                        |         |                |                     |                     |
	| start   | -p                                     | test-preload-20220601105919-7337       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:01 UTC | 01 Jun 22 11:02 UTC |
	|         | test-preload-20220601105919-7337       |                                        |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr        |                                        |         |                |                     |                     |
	|         | -v=1 --wait=true --driver=kvm2         |                                        |         |                |                     |                     |
	|         |  --container-runtime=containerd        |                                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.17.3           |                                        |         |                |                     |                     |
	| ssh     | -p                                     | test-preload-20220601105919-7337       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:02 UTC | 01 Jun 22 11:02 UTC |
	|         | test-preload-20220601105919-7337       |                                        |         |                |                     |                     |
	|         | -- sudo crictl image ls                |                                        |         |                |                     |                     |
	| delete  | -p                                     | test-preload-20220601105919-7337       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:02 UTC | 01 Jun 22 11:02 UTC |
	|         | test-preload-20220601105919-7337       |                                        |         |                |                     |                     |
	| start   | -p                                     | scheduled-stop-20220601110214-7337     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:02 UTC | 01 Jun 22 11:03 UTC |
	|         | scheduled-stop-20220601110214-7337     |                                        |         |                |                     |                     |
	|         | --memory=2048 --driver=kvm2            |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd         |                                        |         |                |                     |                     |
	| stop    | -p                                     | scheduled-stop-20220601110214-7337     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:03 UTC | 01 Jun 22 11:03 UTC |
	|         | scheduled-stop-20220601110214-7337     |                                        |         |                |                     |                     |
	|         | --cancel-scheduled                     |                                        |         |                |                     |                     |
	| stop    | -p                                     | scheduled-stop-20220601110214-7337     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:03 UTC | 01 Jun 22 11:03 UTC |
	|         | scheduled-stop-20220601110214-7337     |                                        |         |                |                     |                     |
	|         | --schedule 15s                         |                                        |         |                |                     |                     |
	| delete  | -p                                     | scheduled-stop-20220601110214-7337     | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:04 UTC | 01 Jun 22 11:04 UTC |
	|         | scheduled-stop-20220601110214-7337     |                                        |         |                |                     |                     |
	| start   | -p                                     | kubernetes-upgrade-20220601110426-7337 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:04 UTC | 01 Jun 22 11:06 UTC |
	|         | kubernetes-upgrade-20220601110426-7337 |                                        |         |                |                     |                     |
	|         | --memory=2200                          |                                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.16.0           |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1 --driver=kvm2   |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd         |                                        |         |                |                     |                     |
	| stop    | -p                                     | kubernetes-upgrade-20220601110426-7337 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | kubernetes-upgrade-20220601110426-7337 |                                        |         |                |                     |                     |
	| start   | -p                                     | offline-containerd-20220601110426-7337 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:04 UTC | 01 Jun 22 11:06 UTC |
	|         | offline-containerd-20220601110426-7337 |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1 --memory=2048   |                                        |         |                |                     |                     |
	|         | --wait=true --driver=kvm2              |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd         |                                        |         |                |                     |                     |
	| delete  | -p                                     | offline-containerd-20220601110426-7337 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:06 UTC |
	|         | offline-containerd-20220601110426-7337 |                                        |         |                |                     |                     |
	| start   | -p                                     | running-upgrade-20220601110426-7337    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:07 UTC |
	|         | running-upgrade-20220601110426-7337    |                                        |         |                |                     |                     |
	|         | --memory=2200 --alsologtostderr        |                                        |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                     |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd         |                                        |         |                |                     |                     |
	| delete  | -p                                     | running-upgrade-20220601110426-7337    | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:07 UTC | 01 Jun 22 11:07 UTC |
	|         | running-upgrade-20220601110426-7337    |                                        |         |                |                     |                     |
	| start   | -p pause-20220601110620-7337           | pause-20220601110620-7337              | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:07 UTC |
	|         | --memory=2048                          |                                        |         |                |                     |                     |
	|         | --install-addons=false                 |                                        |         |                |                     |                     |
	|         | --wait=all --driver=kvm2               |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd         |                                        |         |                |                     |                     |
	| start   | -p                                     | kubernetes-upgrade-20220601110426-7337 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:06 UTC | 01 Jun 22 11:08 UTC |
	|         | kubernetes-upgrade-20220601110426-7337 |                                        |         |                |                     |                     |
	|         | --memory=2200                          |                                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6           |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1 --driver=kvm2   |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd         |                                        |         |                |                     |                     |
	| start   | -p                                     | NoKubernetes-20220601110707-7337       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:07 UTC | 01 Jun 22 11:08 UTC |
	|         | NoKubernetes-20220601110707-7337       |                                        |         |                |                     |                     |
	|         | --driver=kvm2                          |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd         |                                        |         |                |                     |                     |
	| start   | -p                                     | NoKubernetes-20220601110707-7337       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:08 UTC | 01 Jun 22 11:08 UTC |
	|         | NoKubernetes-20220601110707-7337       |                                        |         |                |                     |                     |
	|         | --no-kubernetes --driver=kvm2          |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd         |                                        |         |                |                     |                     |
	| delete  | -p                                     | NoKubernetes-20220601110707-7337       | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:08 UTC | 01 Jun 22 11:08 UTC |
	|         | NoKubernetes-20220601110707-7337       |                                        |         |                |                     |                     |
	| start   | -p                                     | kubernetes-upgrade-20220601110426-7337 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:08 UTC | 01 Jun 22 11:08 UTC |
	|         | kubernetes-upgrade-20220601110426-7337 |                                        |         |                |                     |                     |
	|         | --memory=2200                          |                                        |         |                |                     |                     |
	|         | --kubernetes-version=v1.23.6           |                                        |         |                |                     |                     |
	|         | --alsologtostderr -v=1 --driver=kvm2   |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd         |                                        |         |                |                     |                     |
	| delete  | -p                                     | kubernetes-upgrade-20220601110426-7337 | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:08 UTC | 01 Jun 22 11:08 UTC |
	|         | kubernetes-upgrade-20220601110426-7337 |                                        |         |                |                     |                     |
	| delete  | -p kubenet-20220601110831-7337         | kubenet-20220601110831-7337            | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:08 UTC | 01 Jun 22 11:08 UTC |
	| delete  | -p false-20220601110831-7337           | false-20220601110831-7337              | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:08 UTC | 01 Jun 22 11:08 UTC |
	| start   | -p pause-20220601110620-7337           | pause-20220601110620-7337              | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:07 UTC | 01 Jun 22 11:08 UTC |
	|         | --alsologtostderr                      |                                        |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                     |                                        |         |                |                     |                     |
	|         | --container-runtime=containerd         |                                        |         |                |                     |                     |
	| logs    | pause-20220601110620-7337 logs         | pause-20220601110620-7337              | jenkins | v1.26.0-beta.1 | 01 Jun 22 11:08 UTC | 01 Jun 22 11:08 UTC |
	|         | -n 25                                  |                                        |         |                |                     |                     |
	|---------|----------------------------------------|----------------------------------------|---------|----------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 11:08:31
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 11:08:31.958466   24244 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:08:31.958668   24244 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:08:31.958681   24244 out.go:309] Setting ErrFile to fd 2...
	I0601 11:08:31.958687   24244 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:08:31.958838   24244 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:08:31.959203   24244 out.go:303] Setting JSON to false
	I0601 11:08:31.960407   24244 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3066,"bootTime":1654078646,"procs":273,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:08:31.960484   24244 start.go:125] virtualization: kvm guest
	I0601 11:08:31.963463   24244 out.go:177] * [false-20220601110831-7337] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:08:31.965309   24244 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:08:31.965282   24244 notify.go:193] Checking for updates...
	I0601 11:08:31.966865   24244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:08:31.968377   24244 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:08:31.969848   24244 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:08:31.971298   24244 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:08:31.973181   24244 config.go:178] Loaded profile config "NoKubernetes-20220601110707-7337": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I0601 11:08:31.973327   24244 config.go:178] Loaded profile config "pause-20220601110620-7337": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:08:31.973443   24244 config.go:178] Loaded profile config "stopped-upgrade-20220601110426-7337": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0601 11:08:31.973502   24244 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:08:32.015007   24244 out.go:177] * Using the kvm2 driver based on user configuration
	I0601 11:08:32.016699   24244 start.go:284] selected driver: kvm2
	I0601 11:08:32.016712   24244 start.go:806] validating driver "kvm2" against <nil>
	I0601 11:08:32.016726   24244 start.go:817] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:08:32.018660   24244 out.go:177] 
	W0601 11:08:32.019825   24244 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0601 11:08:32.021158   24244 out.go:177] 
	I0601 11:08:31.498902   23498 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:08:31.498921   23498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0601 11:08:31.498940   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHHostname
	I0601 11:08:31.502387   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:08:31.502838   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c6:ea", ip: ""} in network mk-pause-20220601110620-7337: {Iface:virbr5 ExpiryTime:2022-06-01 12:06:42 +0000 UTC Type:0 Mac:52:54:00:40:c6:ea Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:pause-20220601110620-7337 Clientid:01:52:54:00:40:c6:ea}
	I0601 11:08:31.502866   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined IP address 192.168.50.64 and MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:08:31.503109   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHPort
	I0601 11:08:31.503287   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHKeyPath
	I0601 11:08:31.503444   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHUsername
	I0601 11:08:31.503609   23498 sshutil.go:53] new ssh client: &{IP:192.168.50.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/pause-20220601110620-7337/id_rsa Username:docker}
	I0601 11:08:31.511645   23498 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:35887
	I0601 11:08:31.512011   23498 main.go:134] libmachine: () Calling .GetVersion
	I0601 11:08:31.512532   23498 main.go:134] libmachine: Using API Version  1
	I0601 11:08:31.512556   23498 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 11:08:31.512880   23498 main.go:134] libmachine: () Calling .GetMachineName
	I0601 11:08:31.513059   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetState
	I0601 11:08:31.514573   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .DriverName
	I0601 11:08:31.514795   23498 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0601 11:08:31.514814   23498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0601 11:08:31.514831   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHHostname
	I0601 11:08:31.517470   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:08:31.517931   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:c6:ea", ip: ""} in network mk-pause-20220601110620-7337: {Iface:virbr5 ExpiryTime:2022-06-01 12:06:42 +0000 UTC Type:0 Mac:52:54:00:40:c6:ea Iaid: IPaddr:192.168.50.64 Prefix:24 Hostname:pause-20220601110620-7337 Clientid:01:52:54:00:40:c6:ea}
	I0601 11:08:31.517969   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | domain pause-20220601110620-7337 has defined IP address 192.168.50.64 and MAC address 52:54:00:40:c6:ea in network mk-pause-20220601110620-7337
	I0601 11:08:31.518084   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHPort
	I0601 11:08:31.518240   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHKeyPath
	I0601 11:08:31.518380   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .GetSSHUsername
	I0601 11:08:31.518533   23498 sshutil.go:53] new ssh client: &{IP:192.168.50.64 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/pause-20220601110620-7337/id_rsa Username:docker}
	I0601 11:08:31.560320   23498 node_ready.go:35] waiting up to 6m0s for node "pause-20220601110620-7337" to be "Ready" ...
	I0601 11:08:31.560350   23498 start.go:786] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0601 11:08:31.563697   23498 node_ready.go:49] node "pause-20220601110620-7337" has status "Ready":"True"
	I0601 11:08:31.563712   23498 node_ready.go:38] duration metric: took 3.35893ms waiting for node "pause-20220601110620-7337" to be "Ready" ...
	I0601 11:08:31.563719   23498 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:08:31.570259   23498 pod_ready.go:78] waiting up to 6m0s for pod "coredns-64897985d-cfd9b" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:31.576283   23498 pod_ready.go:92] pod "coredns-64897985d-cfd9b" in "kube-system" namespace has status "Ready":"True"
	I0601 11:08:31.576305   23498 pod_ready.go:81] duration metric: took 6.020923ms waiting for pod "coredns-64897985d-cfd9b" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:31.576317   23498 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20220601110620-7337" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:31.583863   23498 pod_ready.go:92] pod "etcd-pause-20220601110620-7337" in "kube-system" namespace has status "Ready":"True"
	I0601 11:08:31.583882   23498 pod_ready.go:81] duration metric: took 7.5573ms waiting for pod "etcd-pause-20220601110620-7337" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:31.583893   23498 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20220601110620-7337" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:31.589762   23498 pod_ready.go:92] pod "kube-apiserver-pause-20220601110620-7337" in "kube-system" namespace has status "Ready":"True"
	I0601 11:08:31.589779   23498 pod_ready.go:81] duration metric: took 5.878861ms waiting for pod "kube-apiserver-pause-20220601110620-7337" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:31.589790   23498 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20220601110620-7337" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:31.635132   23498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0601 11:08:31.647990   23498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0601 11:08:31.985730   23498 pod_ready.go:92] pod "kube-controller-manager-pause-20220601110620-7337" in "kube-system" namespace has status "Ready":"True"
	I0601 11:08:31.985753   23498 pod_ready.go:81] duration metric: took 395.955579ms waiting for pod "kube-controller-manager-pause-20220601110620-7337" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:31.985766   23498 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-khg8x" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:32.376123   23498 pod_ready.go:92] pod "kube-proxy-khg8x" in "kube-system" namespace has status "Ready":"True"
	I0601 11:08:32.376152   23498 pod_ready.go:81] duration metric: took 390.378008ms waiting for pod "kube-proxy-khg8x" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:32.376164   23498 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20220601110620-7337" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:32.520036   23498 main.go:134] libmachine: Making call to close driver server
	I0601 11:08:32.520070   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .Close
	I0601 11:08:32.520147   23498 main.go:134] libmachine: Making call to close driver server
	I0601 11:08:32.520178   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .Close
	I0601 11:08:32.520358   23498 main.go:134] libmachine: Successfully made call to close driver server
	I0601 11:08:32.520373   23498 main.go:134] libmachine: Making call to close connection to plugin binary
	I0601 11:08:32.520384   23498 main.go:134] libmachine: Making call to close driver server
	I0601 11:08:32.520393   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .Close
	I0601 11:08:32.522053   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | Closing plugin on server side
	I0601 11:08:32.522070   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | Closing plugin on server side
	I0601 11:08:32.522076   23498 main.go:134] libmachine: Successfully made call to close driver server
	I0601 11:08:32.522090   23498 main.go:134] libmachine: Making call to close connection to plugin binary
	I0601 11:08:32.522092   23498 main.go:134] libmachine: Successfully made call to close driver server
	I0601 11:08:32.522122   23498 main.go:134] libmachine: Making call to close driver server
	I0601 11:08:32.522138   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .Close
	I0601 11:08:32.522169   23498 main.go:134] libmachine: Making call to close connection to plugin binary
	I0601 11:08:32.522195   23498 main.go:134] libmachine: Making call to close driver server
	I0601 11:08:32.522205   23498 main.go:134] libmachine: (pause-20220601110620-7337) Calling .Close
	I0601 11:08:32.522431   23498 main.go:134] libmachine: Successfully made call to close driver server
	I0601 11:08:32.522453   23498 main.go:134] libmachine: Making call to close connection to plugin binary
	I0601 11:08:32.523636   23498 main.go:134] libmachine: Successfully made call to close driver server
	I0601 11:08:32.523656   23498 main.go:134] libmachine: Making call to close connection to plugin binary
	I0601 11:08:32.523641   23498 main.go:134] libmachine: (pause-20220601110620-7337) DBG | Closing plugin on server side
	I0601 11:08:32.526660   23498 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0601 11:08:32.528354   23498 addons.go:417] enableAddons completed in 1.080226946s
	I0601 11:08:32.777023   23498 pod_ready.go:92] pod "kube-scheduler-pause-20220601110620-7337" in "kube-system" namespace has status "Ready":"True"
	I0601 11:08:32.777042   23498 pod_ready.go:81] duration metric: took 400.868139ms waiting for pod "kube-scheduler-pause-20220601110620-7337" in "kube-system" namespace to be "Ready" ...
	I0601 11:08:32.777051   23498 pod_ready.go:38] duration metric: took 1.213323943s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0601 11:08:32.777070   23498 api_server.go:51] waiting for apiserver process to appear ...
	I0601 11:08:32.777106   23498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 11:08:32.793773   23498 api_server.go:71] duration metric: took 1.345711792s to wait for apiserver process to appear ...
	I0601 11:08:32.793795   23498 api_server.go:87] waiting for apiserver healthz status ...
	I0601 11:08:32.793807   23498 api_server.go:240] Checking apiserver healthz at https://192.168.50.64:8443/healthz ...
	I0601 11:08:32.798772   23498 api_server.go:266] https://192.168.50.64:8443/healthz returned 200:
	ok
	I0601 11:08:32.799783   23498 api_server.go:140] control plane version: v1.23.6
	I0601 11:08:32.799801   23498 api_server.go:130] duration metric: took 6.00013ms to wait for apiserver health ...
	I0601 11:08:32.799811   23498 system_pods.go:43] waiting for kube-system pods to appear ...
	I0601 11:08:32.983147   23498 system_pods.go:59] 7 kube-system pods found
	I0601 11:08:32.983196   23498 system_pods.go:61] "coredns-64897985d-cfd9b" [33da1e8b-2c7a-4988-9dfe-3162061c879e] Running
	I0601 11:08:32.983209   23498 system_pods.go:61] "etcd-pause-20220601110620-7337" [d8fa1287-1138-46bd-ab96-01cf4324fd0a] Running
	I0601 11:08:32.983217   23498 system_pods.go:61] "kube-apiserver-pause-20220601110620-7337" [c09b0b2b-da78-4b5f-98d4-471f3ecfc3c1] Running
	I0601 11:08:32.983224   23498 system_pods.go:61] "kube-controller-manager-pause-20220601110620-7337" [08519271-d393-4c31-b3b5-166bddc4c3ca] Running
	I0601 11:08:32.983230   23498 system_pods.go:61] "kube-proxy-khg8x" [57bb2264-4bf6-4bf6-8d33-a600f8a192a4] Running
	I0601 11:08:32.983238   23498 system_pods.go:61] "kube-scheduler-pause-20220601110620-7337" [34099a13-3eb0-49fe-a4cc-721b2e7a9159] Running
	I0601 11:08:32.983253   23498 system_pods.go:61] "storage-provisioner" [f30ddd6b-0b90-4a3a-88d9-ea548cf1fb27] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0601 11:08:32.983266   23498 system_pods.go:74] duration metric: took 183.448959ms to wait for pod list to return data ...
	I0601 11:08:32.983276   23498 default_sa.go:34] waiting for default service account to be created ...
	I0601 11:08:33.174774   23498 default_sa.go:45] found service account: "default"
	I0601 11:08:33.174796   23498 default_sa.go:55] duration metric: took 191.511248ms for default service account to be created ...
	I0601 11:08:33.174804   23498 system_pods.go:116] waiting for k8s-apps to be running ...
	I0601 11:08:33.376075   23498 system_pods.go:86] 7 kube-system pods found
	I0601 11:08:33.376104   23498 system_pods.go:89] "coredns-64897985d-cfd9b" [33da1e8b-2c7a-4988-9dfe-3162061c879e] Running
	I0601 11:08:33.376113   23498 system_pods.go:89] "etcd-pause-20220601110620-7337" [d8fa1287-1138-46bd-ab96-01cf4324fd0a] Running
	I0601 11:08:33.376120   23498 system_pods.go:89] "kube-apiserver-pause-20220601110620-7337" [c09b0b2b-da78-4b5f-98d4-471f3ecfc3c1] Running
	I0601 11:08:33.376127   23498 system_pods.go:89] "kube-controller-manager-pause-20220601110620-7337" [08519271-d393-4c31-b3b5-166bddc4c3ca] Running
	I0601 11:08:33.376134   23498 system_pods.go:89] "kube-proxy-khg8x" [57bb2264-4bf6-4bf6-8d33-a600f8a192a4] Running
	I0601 11:08:33.376146   23498 system_pods.go:89] "kube-scheduler-pause-20220601110620-7337" [34099a13-3eb0-49fe-a4cc-721b2e7a9159] Running
	I0601 11:08:33.376159   23498 system_pods.go:89] "storage-provisioner" [f30ddd6b-0b90-4a3a-88d9-ea548cf1fb27] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0601 11:08:33.376172   23498 system_pods.go:126] duration metric: took 201.361877ms to wait for k8s-apps to be running ...
	I0601 11:08:33.376183   23498 system_svc.go:44] waiting for kubelet service to be running ....
	I0601 11:08:33.376232   23498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 11:08:33.393190   23498 system_svc.go:56] duration metric: took 17.00178ms WaitForService to wait for kubelet.
	I0601 11:08:33.393214   23498 kubeadm.go:572] duration metric: took 1.945164231s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0601 11:08:33.393236   23498 node_conditions.go:102] verifying NodePressure condition ...
	I0601 11:08:33.576144   23498 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0601 11:08:33.576172   23498 node_conditions.go:123] node cpu capacity is 2
	I0601 11:08:33.576182   23498 node_conditions.go:105] duration metric: took 182.941908ms to run NodePressure ...
	I0601 11:08:33.576192   23498 start.go:213] waiting for startup goroutines ...
	I0601 11:08:33.617996   23498 start.go:504] kubectl: 1.24.1, cluster: 1.23.6 (minor skew: 1)
	I0601 11:08:33.620285   23498 out.go:177] * Done! kubectl is now configured to use "pause-20220601110620-7337" cluster and "default" namespace by default
	I0601 11:08:31.168601   23916 main.go:134] libmachine: (NoKubernetes-20220601110707-7337) DBG | domain NoKubernetes-20220601110707-7337 has defined MAC address 52:54:00:a1:9b:e6 in network mk-NoKubernetes-20220601110707-7337
	I0601 11:08:31.169127   23916 main.go:134] libmachine: (NoKubernetes-20220601110707-7337) DBG | unable to find current IP address of domain NoKubernetes-20220601110707-7337 in network mk-NoKubernetes-20220601110707-7337
	I0601 11:08:31.169144   23916 main.go:134] libmachine: (NoKubernetes-20220601110707-7337) DBG | I0601 11:08:31.169032   23939 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0601 11:08:32.158391   23916 main.go:134] libmachine: (NoKubernetes-20220601110707-7337) DBG | domain NoKubernetes-20220601110707-7337 has defined MAC address 52:54:00:a1:9b:e6 in network mk-NoKubernetes-20220601110707-7337
	I0601 11:08:32.158896   23916 main.go:134] libmachine: (NoKubernetes-20220601110707-7337) DBG | unable to find current IP address of domain NoKubernetes-20220601110707-7337 in network mk-NoKubernetes-20220601110707-7337
	I0601 11:08:32.158918   23916 main.go:134] libmachine: (NoKubernetes-20220601110707-7337) DBG | I0601 11:08:32.158843   23939 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0601 11:08:33.350255   23916 main.go:134] libmachine: (NoKubernetes-20220601110707-7337) DBG | domain NoKubernetes-20220601110707-7337 has defined MAC address 52:54:00:a1:9b:e6 in network mk-NoKubernetes-20220601110707-7337
	I0601 11:08:33.350743   23916 main.go:134] libmachine: (NoKubernetes-20220601110707-7337) DBG | unable to find current IP address of domain NoKubernetes-20220601110707-7337 in network mk-NoKubernetes-20220601110707-7337
	I0601 11:08:33.350777   23916 main.go:134] libmachine: (NoKubernetes-20220601110707-7337) DBG | I0601 11:08:33.350722   23939 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0601 11:08:35.029431   23916 main.go:134] libmachine: (NoKubernetes-20220601110707-7337) DBG | domain NoKubernetes-20220601110707-7337 has defined MAC address 52:54:00:a1:9b:e6 in network mk-NoKubernetes-20220601110707-7337
	I0601 11:08:35.029928   23916 main.go:134] libmachine: (NoKubernetes-20220601110707-7337) DBG | unable to find current IP address of domain NoKubernetes-20220601110707-7337 in network mk-NoKubernetes-20220601110707-7337
	I0601 11:08:35.029954   23916 main.go:134] libmachine: (NoKubernetes-20220601110707-7337) DBG | I0601 11:08:35.029881   23939 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	4fecce19f81a8       6e38f40d628db       3 seconds ago        Running             storage-provisioner       0                   dc4d5114d6c4c
	d47589a815122       a4ca41631cc7a       17 seconds ago       Running             coredns                   1                   f549f250919c9
	7faa949313182       4c03754524064       18 seconds ago       Running             kube-proxy                1                   ccb92bcae55e9
	a093596174772       595f327f224a4       23 seconds ago       Running             kube-scheduler            1                   82ab164bfcc70
	43e0059e63892       df7b72818ad2e       24 seconds ago       Running             kube-controller-manager   1                   bfe9b69611c25
	311b96fbcda82       25f8c7f3da61c       24 seconds ago       Running             etcd                      1                   a77581603b9fe
	1c5a78f9d34b4       8fa62c12256df       24 seconds ago       Running             kube-apiserver            1                   9a28dc3c3ac57
	210d0cb9ee945       a4ca41631cc7a       54 seconds ago       Exited              coredns                   0                   70c4246720ef5
	6e9d7e184abed       4c03754524064       56 seconds ago       Exited              kube-proxy                0                   8cd8b5398a96f
	cd261e2d2ff3b       25f8c7f3da61c       About a minute ago   Exited              etcd                      0                   aa1161c9a565f
	5b9eea7e9f630       595f327f224a4       About a minute ago   Exited              kube-scheduler            0                   a93feceb1c90c
	ad8fa72d1866a       df7b72818ad2e       About a minute ago   Exited              kube-controller-manager   0                   de728a96e41e0
	56b11b45ac07d       8fa62c12256df       About a minute ago   Exited              kube-apiserver            0                   40aa4efa0be8d
	
	* 
	* ==> containerd <==
	* -- Journal begins at Wed 2022-06-01 11:06:38 UTC, ends at Wed 2022-06-01 11:08:36 UTC. --
	Jun 01 11:08:18 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:18.851214019Z" level=info msg="StopPodSandbox for \"70c4246720ef5a606dd9e5e6f3da85828e5fdf9d8a2f1a949c2be8c089ec1ee2\""
	Jun 01 11:08:18 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:18.851306939Z" level=info msg="Container to stop \"210d0cb9ee945ff5c4e7df4bfb8670d8d1737c543cb60322309d344491b21cb2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Jun 01 11:08:18 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:18.937172722Z" level=info msg="StartContainer for \"7faa94931318263b7fb674322582984fa4ba2d560fc1092bbf6b47a1a27ca6a2\" returns successfully"
	Jun 01 11:08:18 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:18.937683701Z" level=info msg="TearDown network for sandbox \"70c4246720ef5a606dd9e5e6f3da85828e5fdf9d8a2f1a949c2be8c089ec1ee2\" successfully"
	Jun 01 11:08:18 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:18.937813409Z" level=info msg="StopPodSandbox for \"70c4246720ef5a606dd9e5e6f3da85828e5fdf9d8a2f1a949c2be8c089ec1ee2\" returns successfully"
	Jun 01 11:08:18 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:18.938346116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-64897985d-cfd9b,Uid:33da1e8b-2c7a-4988-9dfe-3162061c879e,Namespace:kube-system,Attempt:1,}"
	Jun 01 11:08:19 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:19.081212884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 01 11:08:19 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:19.081487460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 01 11:08:19 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:19.081506222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 01 11:08:19 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:19.082270446Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f549f250919c9b052dbdacda321a7afa42d4fbaec54b233b0c62f63ff09070d3 pid=4861 runtime=io.containerd.runc.v2
	Jun 01 11:08:19 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:19.532795688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-64897985d-cfd9b,Uid:33da1e8b-2c7a-4988-9dfe-3162061c879e,Namespace:kube-system,Attempt:1,} returns sandbox id \"f549f250919c9b052dbdacda321a7afa42d4fbaec54b233b0c62f63ff09070d3\""
	Jun 01 11:08:19 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:19.536695036Z" level=info msg="CreateContainer within sandbox \"f549f250919c9b052dbdacda321a7afa42d4fbaec54b233b0c62f63ff09070d3\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
	Jun 01 11:08:19 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:19.567893484Z" level=info msg="CreateContainer within sandbox \"f549f250919c9b052dbdacda321a7afa42d4fbaec54b233b0c62f63ff09070d3\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"d47589a81512246789bea9be7fefc1a62aab5d532059fb8d70b2dd3e68d41b17\""
	Jun 01 11:08:19 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:19.569306290Z" level=info msg="StartContainer for \"d47589a81512246789bea9be7fefc1a62aab5d532059fb8d70b2dd3e68d41b17\""
	Jun 01 11:08:19 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:19.680383933Z" level=info msg="StartContainer for \"d47589a81512246789bea9be7fefc1a62aab5d532059fb8d70b2dd3e68d41b17\" returns successfully"
	Jun 01 11:08:32 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:32.827482914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:f30ddd6b-0b90-4a3a-88d9-ea548cf1fb27,Namespace:kube-system,Attempt:0,}"
	Jun 01 11:08:32 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:32.853281542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 01 11:08:32 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:32.853394450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 01 11:08:32 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:32.853406138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 01 11:08:32 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:32.853880659Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc4d5114d6c4cd25baa25d9baf09c7be8d95597da1b311305c7046740723a809 pid=5029 runtime=io.containerd.runc.v2
	Jun 01 11:08:33 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:33.252464975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:f30ddd6b-0b90-4a3a-88d9-ea548cf1fb27,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc4d5114d6c4cd25baa25d9baf09c7be8d95597da1b311305c7046740723a809\""
	Jun 01 11:08:33 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:33.260126399Z" level=info msg="CreateContainer within sandbox \"dc4d5114d6c4cd25baa25d9baf09c7be8d95597da1b311305c7046740723a809\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Jun 01 11:08:33 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:33.305293268Z" level=info msg="CreateContainer within sandbox \"dc4d5114d6c4cd25baa25d9baf09c7be8d95597da1b311305c7046740723a809\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"4fecce19f81a84509183048e804a408cf72b6e089e3c52436d0d708b223d1260\""
	Jun 01 11:08:33 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:33.309501955Z" level=info msg="StartContainer for \"4fecce19f81a84509183048e804a408cf72b6e089e3c52436d0d708b223d1260\""
	Jun 01 11:08:33 pause-20220601110620-7337 containerd[3769]: time="2022-06-01T11:08:33.418311103Z" level=info msg="StartContainer for \"4fecce19f81a84509183048e804a408cf72b6e089e3c52436d0d708b223d1260\" returns successfully"
	
	* 
	* ==> coredns [210d0cb9ee945ff5c4e7df4bfb8670d8d1737c543cb60322309d344491b21cb2] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> coredns [d47589a81512246789bea9be7fefc1a62aab5d532059fb8d70b2dd3e68d41b17] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 7ae91e86dd75dee9ae501cb58003198b
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20220601110620-7337
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20220601110620-7337
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a356b2b7b41c6be3e1e342298908c27bb98ce92
	                    minikube.k8s.io/name=pause-20220601110620-7337
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_06_01T11_07_27_0700
	                    minikube.k8s.io/version=v1.26.0-beta.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 01 Jun 2022 11:07:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20220601110620-7337
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 01 Jun 2022 11:08:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 01 Jun 2022 11:08:17 +0000   Wed, 01 Jun 2022 11:07:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 01 Jun 2022 11:08:17 +0000   Wed, 01 Jun 2022 11:07:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 01 Jun 2022 11:08:17 +0000   Wed, 01 Jun 2022 11:07:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 01 Jun 2022 11:08:17 +0000   Wed, 01 Jun 2022 11:07:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.64
	  Hostname:    pause-20220601110620-7337
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2034396Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2034396Ki
	  pods:               110
	System Info:
	  Machine ID:                 5041ca1397ac4627af301b21abe77af4
	  System UUID:                5041ca13-97ac-4627-af30-1b21abe77af4
	  Boot ID:                    8025d7d0-7d07-45f9-8b61-1e2014e58b60
	  Kernel Version:             4.19.235
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.4
	  Kubelet Version:            v1.23.6
	  Kube-Proxy Version:         v1.23.6
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-64897985d-cfd9b                              100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     58s
	  kube-system                 etcd-pause-20220601110620-7337                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         65s
	  kube-system                 kube-apiserver-pause-20220601110620-7337             250m (12%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-pause-20220601110620-7337    200m (10%)    0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-proxy-khg8x                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-pause-20220601110620-7337             100m (5%)     0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 18s                kube-proxy  
	  Normal  Starting                 56s                kube-proxy  
	  Normal  NodeHasSufficientMemory  80s (x4 over 80s)  kubelet     Node pause-20220601110620-7337 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    80s (x4 over 80s)  kubelet     Node pause-20220601110620-7337 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     80s (x4 over 80s)  kubelet     Node pause-20220601110620-7337 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  80s                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 80s                kubelet     Starting kubelet.
	  Normal  NodeHasNoDiskPressure    65s                kubelet     Node pause-20220601110620-7337 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s                kubelet     Node pause-20220601110620-7337 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  65s                kubelet     Node pause-20220601110620-7337 status is now: NodeHasSufficientMemory
	  Normal  Starting                 65s                kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  65s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                59s                kubelet     Node pause-20220601110620-7337 status is now: NodeReady
	  Normal  Starting                 27s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 27s)  kubelet     Node pause-20220601110620-7337 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 27s)  kubelet     Node pause-20220601110620-7337 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 27s)  kubelet     Node pause-20220601110620-7337 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.040011] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.028005] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.569727] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.360382] systemd-fstab-generator[1165]: Ignoring "noauto" for root device
	[  +0.181064] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.802847] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1737 comm=systemd-network
	[  +3.116514] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[ +18.267726] systemd-fstab-generator[2153]: Ignoring "noauto" for root device
	[Jun 1 11:07] systemd-fstab-generator[2186]: Ignoring "noauto" for root device
	[  +0.161618] systemd-fstab-generator[2197]: Ignoring "noauto" for root device
	[  +0.368332] systemd-fstab-generator[2231]: Ignoring "noauto" for root device
	[  +6.441451] systemd-fstab-generator[2429]: Ignoring "noauto" for root device
	[ +15.264602] systemd-fstab-generator[2814]: Ignoring "noauto" for root device
	[ +13.643916] kauditd_printk_skb: 38 callbacks suppressed
	[  +7.474164] kauditd_printk_skb: 86 callbacks suppressed
	[  +5.478041] kauditd_printk_skb: 20 callbacks suppressed
	[  +2.719313] systemd-fstab-generator[3722]: Ignoring "noauto" for root device
	[  +0.149449] systemd-fstab-generator[3733]: Ignoring "noauto" for root device
	[  +0.315115] systemd-fstab-generator[3761]: Ignoring "noauto" for root device
	[  +3.842952] kauditd_printk_skb: 8 callbacks suppressed
	[Jun 1 11:08] systemd-fstab-generator[4279]: Ignoring "noauto" for root device
	[ +13.219223] kauditd_printk_skb: 53 callbacks suppressed
	[ +11.617220] kauditd_printk_skb: 23 callbacks suppressed
	
	* 
	* ==> etcd [311b96fbcda822ef4b02d7d96f277235c4c1f3b7646779f4da95885b73b546ac] <==
	* {"level":"info","ts":"2022-06-01T11:08:13.941Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"7e00f7fcc1a7adc9","local-server-version":"3.5.1","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-06-01T11:08:13.962Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-06-01T11:08:13.966Z","caller":"etcdserver/server.go:744","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-06-01T11:08:13.966Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e00f7fcc1a7adc9 switched to configuration voters=(9079529513731730889)"}
	{"level":"info","ts":"2022-06-01T11:08:13.966Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"c6005d374c1772c0","local-member-id":"7e00f7fcc1a7adc9","added-peer-id":"7e00f7fcc1a7adc9","added-peer-peer-urls":["https://192.168.50.64:2380"]}
	{"level":"info","ts":"2022-06-01T11:08:13.966Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"c6005d374c1772c0","local-member-id":"7e00f7fcc1a7adc9","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:08:13.966Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-06-01T11:08:13.973Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"7e00f7fcc1a7adc9","initial-advertise-peer-urls":["https://192.168.50.64:2380"],"listen-peer-urls":["https://192.168.50.64:2380"],"advertise-client-urls":["https://192.168.50.64:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.64:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-06-01T11:08:13.973Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-06-01T11:08:13.974Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.50.64:2380"}
	{"level":"info","ts":"2022-06-01T11:08:13.974Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.50.64:2380"}
	{"level":"info","ts":"2022-06-01T11:08:14.805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e00f7fcc1a7adc9 is starting a new election at term 2"}
	{"level":"info","ts":"2022-06-01T11:08:14.805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e00f7fcc1a7adc9 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-06-01T11:08:14.805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e00f7fcc1a7adc9 received MsgPreVoteResp from 7e00f7fcc1a7adc9 at term 2"}
	{"level":"info","ts":"2022-06-01T11:08:14.805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e00f7fcc1a7adc9 became candidate at term 3"}
	{"level":"info","ts":"2022-06-01T11:08:14.805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e00f7fcc1a7adc9 received MsgVoteResp from 7e00f7fcc1a7adc9 at term 3"}
	{"level":"info","ts":"2022-06-01T11:08:14.805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7e00f7fcc1a7adc9 became leader at term 3"}
	{"level":"info","ts":"2022-06-01T11:08:14.805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7e00f7fcc1a7adc9 elected leader 7e00f7fcc1a7adc9 at term 3"}
	{"level":"info","ts":"2022-06-01T11:08:14.808Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"7e00f7fcc1a7adc9","local-member-attributes":"{Name:pause-20220601110620-7337 ClientURLs:[https://192.168.50.64:2379]}","request-path":"/0/members/7e00f7fcc1a7adc9/attributes","cluster-id":"c6005d374c1772c0","publish-timeout":"7s"}
	{"level":"info","ts":"2022-06-01T11:08:14.809Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:08:14.809Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-06-01T11:08:14.811Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-06-01T11:08:14.818Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.50.64:2379"}
	{"level":"info","ts":"2022-06-01T11:08:14.820Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-06-01T11:08:14.820Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [cd261e2d2ff3b0de0b3fe0411dec2110ca530014dcdc702e0acd927e9d6fd7f8] <==
	* {"level":"warn","ts":"2022-06-01T11:07:43.799Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"768.029089ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-64897985d-b6g8m\" ","response":"range_response_count:1 size:4626"}
	{"level":"info","ts":"2022-06-01T11:07:43.799Z","caller":"traceutil/trace.go:171","msg":"trace[1963439830] range","detail":"{range_begin:/registry/pods/kube-system/coredns-64897985d-b6g8m; range_end:; response_count:1; response_revision:470; }","duration":"768.084544ms","start":"2022-06-01T11:07:43.031Z","end":"2022-06-01T11:07:43.799Z","steps":["trace[1963439830] 'agreement among raft nodes before linearized reading'  (duration: 767.996415ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T11:07:43.799Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T11:07:43.031Z","time spent":"768.126345ms","remote":"127.0.0.1:48750","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":4649,"request content":"key:\"/registry/pods/kube-system/coredns-64897985d-b6g8m\" "}
	{"level":"info","ts":"2022-06-01T11:07:43.898Z","caller":"traceutil/trace.go:171","msg":"trace[205847652] linearizableReadLoop","detail":"{readStateIndex:483; appliedIndex:483; }","duration":"101.356832ms","start":"2022-06-01T11:07:43.797Z","end":"2022-06-01T11:07:43.898Z","steps":["trace[205847652] 'read index received'  (duration: 101.350079ms)","trace[205847652] 'applied index is now lower than readState.Index'  (duration: 5.845µs)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T11:07:44.441Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.040582734s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-01T11:07:44.441Z","caller":"traceutil/trace.go:171","msg":"trace[1770652811] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:470; }","duration":"1.040654584s","start":"2022-06-01T11:07:43.401Z","end":"2022-06-01T11:07:44.441Z","steps":["trace[1770652811] 'agreement among raft nodes before linearized reading'  (duration: 497.956946ms)","trace[1770652811] 'range keys from in-memory index tree'  (duration: 542.609703ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T11:07:44.441Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T11:07:43.401Z","time spent":"1.040711815s","remote":"127.0.0.1:48848","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2022-06-01T11:07:44.442Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.183534976s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-01T11:07:44.442Z","caller":"traceutil/trace.go:171","msg":"trace[31237833] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:470; }","duration":"1.183603526s","start":"2022-06-01T11:07:43.258Z","end":"2022-06-01T11:07:44.442Z","steps":["trace[31237833] 'agreement among raft nodes before linearized reading'  (duration: 640.649127ms)","trace[31237833] 'range keys from in-memory index tree'  (duration: 542.873748ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T11:07:44.442Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"543.112777ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12522682208803674963 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-64897985d-b6g8m.16f47a8272a1f6cf\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-64897985d-b6g8m.16f47a8272a1f6cf\" value_size:625 lease:3299310171948898683 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2022-06-01T11:07:44.442Z","caller":"traceutil/trace.go:171","msg":"trace[57789242] linearizableReadLoop","detail":"{readStateIndex:484; appliedIndex:483; }","duration":"543.237143ms","start":"2022-06-01T11:07:43.899Z","end":"2022-06-01T11:07:44.442Z","steps":["trace[57789242] 'read index received'  (duration: 325.465367ms)","trace[57789242] 'applied index is now lower than readState.Index'  (duration: 217.77096ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-01T11:07:44.442Z","caller":"traceutil/trace.go:171","msg":"trace[1011451717] transaction","detail":"{read_only:false; response_revision:471; number_of_response:1; }","duration":"599.254141ms","start":"2022-06-01T11:07:43.843Z","end":"2022-06-01T11:07:44.442Z","steps":["trace[1011451717] 'process raft request'  (duration: 56.07336ms)","trace[1011451717] 'compare'  (duration: 542.234257ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T11:07:44.442Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"637.316849ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-20220601110620-7337\" ","response":"range_response_count:1 size:4578"}
	{"level":"warn","ts":"2022-06-01T11:07:44.442Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T11:07:43.843Z","time spent":"599.310289ms","remote":"127.0.0.1:48728","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":712,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-64897985d-b6g8m.16f47a8272a1f6cf\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-64897985d-b6g8m.16f47a8272a1f6cf\" value_size:625 lease:3299310171948898683 >> failure:<>"}
	{"level":"info","ts":"2022-06-01T11:07:44.442Z","caller":"traceutil/trace.go:171","msg":"trace[169433543] range","detail":"{range_begin:/registry/minions/pause-20220601110620-7337; range_end:; response_count:1; response_revision:471; }","duration":"637.330503ms","start":"2022-06-01T11:07:43.805Z","end":"2022-06-01T11:07:44.442Z","steps":["trace[169433543] 'agreement among raft nodes before linearized reading'  (duration: 637.269331ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T11:07:44.442Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T11:07:43.805Z","time spent":"637.462524ms","remote":"127.0.0.1:48748","response type":"/etcdserverpb.KV/Range","request count":0,"request size":45,"response count":1,"response size":4601,"request content":"key:\"/registry/minions/pause-20220601110620-7337\" "}
	{"level":"warn","ts":"2022-06-01T11:07:45.784Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"271.937353ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12522682208803674976 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.64\" mod_revision:303 > success:<request_put:<key:\"/registry/masterleases/192.168.50.64\" value_size:68 lease:3299310171948899166 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.64\" > >>","response":"size:16"}
	{"level":"info","ts":"2022-06-01T11:07:45.785Z","caller":"traceutil/trace.go:171","msg":"trace[561756336] linearizableReadLoop","detail":"{readStateIndex:486; appliedIndex:485; }","duration":"383.551551ms","start":"2022-06-01T11:07:45.401Z","end":"2022-06-01T11:07:45.785Z","steps":["trace[561756336] 'read index received'  (duration: 111.06061ms)","trace[561756336] 'applied index is now lower than readState.Index'  (duration: 272.488067ms)"],"step_count":2}
	{"level":"info","ts":"2022-06-01T11:07:45.785Z","caller":"traceutil/trace.go:171","msg":"trace[1748133338] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"394.410686ms","start":"2022-06-01T11:07:45.390Z","end":"2022-06-01T11:07:45.785Z","steps":["trace[1748133338] 'process raft request'  (duration: 121.756441ms)","trace[1748133338] 'compare'  (duration: 271.824414ms)"],"step_count":2}
	{"level":"warn","ts":"2022-06-01T11:07:45.785Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T11:07:45.390Z","time spent":"394.906543ms","remote":"127.0.0.1:48724","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.50.64\" mod_revision:303 > success:<request_put:<key:\"/registry/masterleases/192.168.50.64\" value_size:68 lease:3299310171948899166 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.64\" > >"}
	{"level":"warn","ts":"2022-06-01T11:07:45.785Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"383.875733ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2022-06-01T11:07:45.786Z","caller":"traceutil/trace.go:171","msg":"trace[433450755] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:472; }","duration":"384.844591ms","start":"2022-06-01T11:07:45.401Z","end":"2022-06-01T11:07:45.786Z","steps":["trace[433450755] 'agreement among raft nodes before linearized reading'  (duration: 383.825733ms)"],"step_count":1}
	{"level":"warn","ts":"2022-06-01T11:07:45.786Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-06-01T11:07:45.401Z","time spent":"385.028773ms","remote":"127.0.0.1:48848","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2022-06-01T11:07:45.786Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"254.914514ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-64897985d-b6g8m\" ","response":"range_response_count:1 size:4626"}
	{"level":"info","ts":"2022-06-01T11:07:45.787Z","caller":"traceutil/trace.go:171","msg":"trace[1462854947] range","detail":"{range_begin:/registry/pods/kube-system/coredns-64897985d-b6g8m; range_end:; response_count:1; response_revision:472; }","duration":"255.857807ms","start":"2022-06-01T11:07:45.531Z","end":"2022-06-01T11:07:45.787Z","steps":["trace[1462854947] 'agreement among raft nodes before linearized reading'  (duration: 254.620124ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  11:08:37 up 2 min,  0 users,  load average: 1.53, 0.65, 0.24
	Linux pause-20220601110620-7337 4.19.235 #1 SMP Fri May 27 20:55:39 UTC 2022 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [1c5a78f9d34b4a64a73c941808eb4140890ab7b2e83154479eda32adbecd78e2] <==
	* I0601 11:08:17.193319       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0601 11:08:17.199948       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0601 11:08:17.200124       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	I0601 11:08:17.200353       1 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0601 11:08:17.220284       1 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0601 11:08:17.240622       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	E0601 11:08:17.241917       1 controller.go:157] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0601 11:08:17.245322       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0601 11:08:17.246327       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0601 11:08:17.254083       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0601 11:08:17.265656       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0601 11:08:17.265679       1 cache.go:39] Caches are synced for autoregister controller
	I0601 11:08:17.300671       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0601 11:08:17.322886       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0601 11:08:18.122709       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0601 11:08:18.137365       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0601 11:08:18.163164       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0601 11:08:18.702638       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0601 11:08:18.715526       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0601 11:08:18.779780       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0601 11:08:18.812809       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0601 11:08:18.819759       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0601 11:08:19.043263       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 11:08:30.219530       1 controller.go:611] quota admission added evaluator for: endpoints
	I0601 11:08:30.451942       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [56b11b45ac07d6beafde3aaf8283c976fc2b48fe111f9be2f406a2c0d0a3009b] <==
	* I0601 11:07:32.275471       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0601 11:07:39.688249       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0601 11:07:39.719349       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0601 11:07:40.954295       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0601 11:07:43.800418       1 trace.go:205] Trace[1495120344]: "GuaranteedUpdate etcd3" type:*discovery.EndpointSlice (01-Jun-2022 11:07:42.782) (total time: 1018ms):
	Trace[1495120344]: ---"Transaction committed" 1017ms (11:07:43.800)
	Trace[1495120344]: [1.018261732s] [1.018261732s] END
	I0601 11:07:43.801519       1 trace.go:205] Trace[1985781852]: "Update" url:/apis/discovery.k8s.io/v1/namespaces/kube-system/endpointslices/kube-dns-r5ck2,user-agent:kube-controller-manager/v1.23.6 (linux/amd64) kubernetes/ad33385/system:serviceaccount:kube-system:endpointslice-controller,audit-id:8af30ceb-f3cd-4372-9802-2cac9ac89934,client:192.168.50.64,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (01-Jun-2022 11:07:42.781) (total time: 1019ms):
	Trace[1985781852]: ---"Object stored in database" 1019ms (11:07:43.801)
	Trace[1985781852]: [1.019584782s] [1.019584782s] END
	I0601 11:07:43.802884       1 trace.go:205] Trace[1809827139]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-64897985d-b6g8m,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:e2d9d8f3-dd34-4dd7-8f75-9357e0b36189,client:192.168.50.1,accept:application/json, */*,protocol:HTTP/2.0 (01-Jun-2022 11:07:43.030) (total time: 772ms):
	Trace[1809827139]: ---"About to write a response" 771ms (11:07:43.802)
	Trace[1809827139]: [772.377579ms] [772.377579ms] END
	I0601 11:07:43.803647       1 trace.go:205] Trace[636711164]: "GuaranteedUpdate etcd3" type:*core.Endpoints (01-Jun-2022 11:07:42.788) (total time: 1015ms):
	Trace[636711164]: ---"Transaction committed" 1014ms (11:07:43.803)
	Trace[636711164]: [1.015287574s] [1.015287574s] END
	I0601 11:07:43.804152       1 trace.go:205] Trace[1043678735]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-dns,user-agent:kube-controller-manager/v1.23.6 (linux/amd64) kubernetes/ad33385/system:serviceaccount:kube-system:endpoint-controller,audit-id:06cae169-1606-4a25-a337-aed7947e3553,client:192.168.50.64,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (01-Jun-2022 11:07:42.788) (total time: 1015ms):
	Trace[1043678735]: ---"Object stored in database" 1015ms (11:07:43.804)
	Trace[1043678735]: [1.015946768s] [1.015946768s] END
	I0601 11:07:44.445615       1 trace.go:205] Trace[827037909]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.23.6 (linux/amd64) kubernetes/ad33385,audit-id:5a6baec0-28d4-4066-b5ee-09be29ce7efe,client:192.168.50.64,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (01-Jun-2022 11:07:43.841) (total time: 604ms):
	Trace[827037909]: ---"Object stored in database" 603ms (11:07:44.445)
	Trace[827037909]: [604.172133ms] [604.172133ms] END
	I0601 11:07:44.447566       1 trace.go:205] Trace[785115675]: "Get" url:/api/v1/nodes/pause-20220601110620-7337,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:7301f112-b626-416d-b8f2-98a97ad801af,client:192.168.50.1,accept:application/json, */*,protocol:HTTP/2.0 (01-Jun-2022 11:07:43.804) (total time: 642ms):
	Trace[785115675]: ---"About to write a response" 639ms (11:07:44.444)
	Trace[785115675]: [642.659186ms] [642.659186ms] END
	
	* 
	* ==> kube-controller-manager [43e0059e6389236d53b0c17d920ac076fb5f940727f5965cbcce60e344180360] <==
	* I0601 11:08:30.175194       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0601 11:08:30.181441       1 shared_informer.go:247] Caches are synced for endpoint 
	I0601 11:08:30.185152       1 shared_informer.go:247] Caches are synced for TTL 
	I0601 11:08:30.188375       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0601 11:08:30.190758       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0601 11:08:30.192346       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0601 11:08:30.194636       1 shared_informer.go:247] Caches are synced for job 
	I0601 11:08:30.196270       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0601 11:08:30.197168       1 shared_informer.go:247] Caches are synced for deployment 
	I0601 11:08:30.218142       1 shared_informer.go:247] Caches are synced for HPA 
	I0601 11:08:30.222097       1 shared_informer.go:247] Caches are synced for disruption 
	I0601 11:08:30.222161       1 disruption.go:371] Sending events to api server.
	I0601 11:08:30.222432       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0601 11:08:30.226081       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0601 11:08:30.230236       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0601 11:08:30.235467       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0601 11:08:30.307109       1 shared_informer.go:247] Caches are synced for namespace 
	I0601 11:08:30.370555       1 shared_informer.go:247] Caches are synced for service account 
	I0601 11:08:30.384290       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0601 11:08:30.390137       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:08:30.391384       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:08:30.438107       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0601 11:08:30.889644       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:08:30.891037       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:08:30.891245       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-controller-manager [ad8fa72d1866ad4c1dc86626944739f7227b699d591b5ed6f510390f961b1dd0] <==
	* I0601 11:07:38.837543       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
	I0601 11:07:38.841791       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0601 11:07:38.860559       1 shared_informer.go:247] Caches are synced for HPA 
	I0601 11:07:38.860635       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0601 11:07:38.860767       1 shared_informer.go:247] Caches are synced for service account 
	I0601 11:07:38.862438       1 shared_informer.go:247] Caches are synced for deployment 
	I0601 11:07:38.863337       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0601 11:07:38.982571       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:07:39.009109       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0601 11:07:39.022314       1 shared_informer.go:247] Caches are synced for resource quota 
	I0601 11:07:39.066468       1 shared_informer.go:247] Caches are synced for taint 
	I0601 11:07:39.066686       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	W0601 11:07:39.066810       1 node_lifecycle_controller.go:1012] Missing timestamp for Node pause-20220601110620-7337. Assuming now as a timestamp.
	I0601 11:07:39.066850       1 node_lifecycle_controller.go:1213] Controller detected that zone  is now in state Normal.
	I0601 11:07:39.067376       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0601 11:07:39.067916       1 event.go:294] "Event occurred" object="pause-20220601110620-7337" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20220601110620-7337 event: Registered Node pause-20220601110620-7337 in Controller"
	I0601 11:07:39.490516       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:07:39.508249       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0601 11:07:39.508305       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0601 11:07:39.696816       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0601 11:07:39.736494       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-khg8x"
	I0601 11:07:39.778525       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0601 11:07:39.818490       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-b6g8m"
	I0601 11:07:39.827919       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-cfd9b"
	I0601 11:07:39.873625       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-b6g8m"
	
	* 
	* ==> kube-proxy [6e9d7e184abed4789b1f1d5e9279f2e6e10c04b7c1f2c361b24609a47937900c] <==
	* I0601 11:07:40.755419       1 node.go:163] Successfully retrieved node IP: 192.168.50.64
	I0601 11:07:40.755468       1 server_others.go:138] "Detected node IP" address="192.168.50.64"
	I0601 11:07:40.755538       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:07:40.905152       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0601 11:07:40.905196       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:07:40.941277       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:07:40.946104       1 config.go:317] "Starting service config controller"
	I0601 11:07:40.946121       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:07:40.946147       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:07:40.946152       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:07:41.069829       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0601 11:07:41.095590       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [7faa94931318263b7fb674322582984fa4ba2d560fc1092bbf6b47a1a27ca6a2] <==
	* I0601 11:08:18.993468       1 node.go:163] Successfully retrieved node IP: 192.168.50.64
	I0601 11:08:18.993536       1 server_others.go:138] "Detected node IP" address="192.168.50.64"
	I0601 11:08:18.993564       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0601 11:08:19.038107       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0601 11:08:19.038152       1 server_others.go:206] "Using iptables Proxier"
	I0601 11:08:19.038453       1 server.go:656] "Version info" version="v1.23.6"
	I0601 11:08:19.039257       1 config.go:226] "Starting endpoint slice config controller"
	I0601 11:08:19.039308       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0601 11:08:19.039365       1 config.go:317] "Starting service config controller"
	I0601 11:08:19.039397       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0601 11:08:19.140186       1 shared_informer.go:247] Caches are synced for service config 
	I0601 11:08:19.140276       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [5b9eea7e9f630b4f732f8810f7ecbfacf550b07152d3c2ec94cb2a7d2f311190] <==
	* W0601 11:07:23.728209       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0601 11:07:23.728772       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0601 11:07:23.729127       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0601 11:07:23.729589       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0601 11:07:24.580109       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0601 11:07:24.580361       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0601 11:07:24.671793       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0601 11:07:24.671863       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0601 11:07:24.757030       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0601 11:07:24.757080       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0601 11:07:24.764925       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0601 11:07:24.765276       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0601 11:07:24.798305       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0601 11:07:24.798365       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0601 11:07:24.828452       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0601 11:07:24.828502       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0601 11:07:24.850143       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0601 11:07:24.850191       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0601 11:07:24.889871       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0601 11:07:24.889923       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0601 11:07:24.917240       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0601 11:07:24.917289       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0601 11:07:24.948844       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0601 11:07:24.948898       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0601 11:07:26.918880       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [a093596174772dc021711949d956abddf61b8ab0e6aa809b74482f373e0b6f69] <==
	* I0601 11:08:15.116305       1 serving.go:348] Generated self-signed cert in-memory
	W0601 11:08:17.177658       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0601 11:08:17.177682       1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0601 11:08:17.177769       1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0601 11:08:17.177774       1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0601 11:08:17.267601       1 server.go:139] "Starting Kubernetes Scheduler" version="v1.23.6"
	I0601 11:08:17.270560       1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
	I0601 11:08:17.270741       1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0601 11:08:17.270765       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0601 11:08:17.270814       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0601 11:08:17.373893       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2022-06-01 11:06:38 UTC, ends at Wed 2022-06-01 11:08:37 UTC. --
	Jun 01 11:08:16 pause-20220601110620-7337 kubelet[4285]: E0601 11:08:16.419084    4285 kubelet.go:2461] "Error getting node" err="node \"pause-20220601110620-7337\" not found"
	Jun 01 11:08:16 pause-20220601110620-7337 kubelet[4285]: E0601 11:08:16.520079    4285 kubelet.go:2461] "Error getting node" err="node \"pause-20220601110620-7337\" not found"
	Jun 01 11:08:16 pause-20220601110620-7337 kubelet[4285]: E0601 11:08:16.621024    4285 kubelet.go:2461] "Error getting node" err="node \"pause-20220601110620-7337\" not found"
	Jun 01 11:08:16 pause-20220601110620-7337 kubelet[4285]: E0601 11:08:16.722066    4285 kubelet.go:2461] "Error getting node" err="node \"pause-20220601110620-7337\" not found"
	Jun 01 11:08:16 pause-20220601110620-7337 kubelet[4285]: E0601 11:08:16.823210    4285 kubelet.go:2461] "Error getting node" err="node \"pause-20220601110620-7337\" not found"
	Jun 01 11:08:16 pause-20220601110620-7337 kubelet[4285]: E0601 11:08:16.923587    4285 kubelet.go:2461] "Error getting node" err="node \"pause-20220601110620-7337\" not found"
	Jun 01 11:08:17 pause-20220601110620-7337 kubelet[4285]: E0601 11:08:17.024564    4285 kubelet.go:2461] "Error getting node" err="node \"pause-20220601110620-7337\" not found"
	Jun 01 11:08:17 pause-20220601110620-7337 kubelet[4285]: E0601 11:08:17.125785    4285 kubelet.go:2461] "Error getting node" err="node \"pause-20220601110620-7337\" not found"
	Jun 01 11:08:17 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:17.227138    4285 kuberuntime_manager.go:1105] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jun 01 11:08:17 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:17.228534    4285 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jun 01 11:08:17 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:17.326219    4285 kubelet_node_status.go:108] "Node was previously registered" node="pause-20220601110620-7337"
	Jun 01 11:08:17 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:17.326421    4285 kubelet_node_status.go:73] "Successfully registered node" node="pause-20220601110620-7337"
	Jun 01 11:08:17 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:17.939835    4285 apiserver.go:52] "Watching apiserver"
	Jun 01 11:08:17 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:17.943164    4285 topology_manager.go:200] "Topology Admit Handler"
	Jun 01 11:08:17 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:17.943289    4285 topology_manager.go:200] "Topology Admit Handler"
	Jun 01 11:08:18 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:18.034657    4285 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57bb2264-4bf6-4bf6-8d33-a600f8a192a4-xtables-lock\") pod \"kube-proxy-khg8x\" (UID: \"57bb2264-4bf6-4bf6-8d33-a600f8a192a4\") " pod="kube-system/kube-proxy-khg8x"
	Jun 01 11:08:18 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:18.034735    4285 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57bb2264-4bf6-4bf6-8d33-a600f8a192a4-lib-modules\") pod \"kube-proxy-khg8x\" (UID: \"57bb2264-4bf6-4bf6-8d33-a600f8a192a4\") " pod="kube-system/kube-proxy-khg8x"
	Jun 01 11:08:18 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:18.034762    4285 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8dm2\" (UniqueName: \"kubernetes.io/projected/57bb2264-4bf6-4bf6-8d33-a600f8a192a4-kube-api-access-k8dm2\") pod \"kube-proxy-khg8x\" (UID: \"57bb2264-4bf6-4bf6-8d33-a600f8a192a4\") " pod="kube-system/kube-proxy-khg8x"
	Jun 01 11:08:18 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:18.034786    4285 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33da1e8b-2c7a-4988-9dfe-3162061c879e-config-volume\") pod \"coredns-64897985d-cfd9b\" (UID: \"33da1e8b-2c7a-4988-9dfe-3162061c879e\") " pod="kube-system/coredns-64897985d-cfd9b"
	Jun 01 11:08:18 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:18.034804    4285 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/57bb2264-4bf6-4bf6-8d33-a600f8a192a4-kube-proxy\") pod \"kube-proxy-khg8x\" (UID: \"57bb2264-4bf6-4bf6-8d33-a600f8a192a4\") " pod="kube-system/kube-proxy-khg8x"
	Jun 01 11:08:18 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:18.034824    4285 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z58fl\" (UniqueName: \"kubernetes.io/projected/33da1e8b-2c7a-4988-9dfe-3162061c879e-kube-api-access-z58fl\") pod \"coredns-64897985d-cfd9b\" (UID: \"33da1e8b-2c7a-4988-9dfe-3162061c879e\") " pod="kube-system/coredns-64897985d-cfd9b"
	Jun 01 11:08:18 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:18.034832    4285 reconciler.go:157] "Reconciler: start to sync state"
	Jun 01 11:08:32 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:32.522902    4285 topology_manager.go:200] "Topology Admit Handler"
	Jun 01 11:08:32 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:32.658747    4285 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f30ddd6b-0b90-4a3a-88d9-ea548cf1fb27-tmp\") pod \"storage-provisioner\" (UID: \"f30ddd6b-0b90-4a3a-88d9-ea548cf1fb27\") " pod="kube-system/storage-provisioner"
	Jun 01 11:08:32 pause-20220601110620-7337 kubelet[4285]: I0601 11:08:32.658930    4285 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4j8k\" (UniqueName: \"kubernetes.io/projected/f30ddd6b-0b90-4a3a-88d9-ea548cf1fb27-kube-api-access-j4j8k\") pod \"storage-provisioner\" (UID: \"f30ddd6b-0b90-4a3a-88d9-ea548cf1fb27\") " pod="kube-system/storage-provisioner"
	
	* 
	* ==> storage-provisioner [4fecce19f81a84509183048e804a408cf72b6e089e3c52436d0d708b223d1260] <==
	* I0601 11:08:33.463848       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0601 11:08:33.498348       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0601 11:08:33.499236       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0601 11:08:33.513431       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0601 11:08:33.513756       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20220601110620-7337_3e979037-e188-4041-ba26-ffaad90b4b1d!
	I0601 11:08:33.516413       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad23f3b0-dcbd-4dd9-83a3-2484c24c9c05", APIVersion:"v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20220601110620-7337_3e979037-e188-4041-ba26-ffaad90b4b1d became leader
	I0601 11:08:33.614408       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20220601110620-7337_3e979037-e188-4041-ba26-ffaad90b4b1d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20220601110620-7337 -n pause-20220601110620-7337
helpers_test.go:261: (dbg) Run:  kubectl --context pause-20220601110620-7337 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestPause/serial/SecondStartNoReconfiguration]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context pause-20220601110620-7337 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-20220601110620-7337 describe pod : exit status 1 (40.082031ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context pause-20220601110620-7337 describe pod : exit status 1
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (43.13s)
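
Note on the post-mortem failure above: the final exit status 1 is not a cluster problem. The field-selector query at helpers_test.go:261 returned no non-running pods, so the describe step at helpers_test.go:275 invoked kubectl with an empty resource name, which kubectl rejects with "error: resource name may not be empty". A minimal shell sketch of the same post-mortem sequence with an emptiness guard (an illustrative sketch only, not the actual helpers_test.go implementation; the variable name is hypothetical):

	# Collect non-running pods, exactly as the harness does.
	pods=$(kubectl --context pause-20220601110620-7337 get po \
	  -o=jsonpath='{.items[*].metadata.name}' -A \
	  --field-selector=status.phase!=Running)
	# Only describe when the selector matched something; "kubectl describe pod"
	# with no name exits 1 ("resource name may not be empty").
	if [ -n "$pods" ]; then
	  kubectl --context pause-20220601110620-7337 describe pod $pods
	fi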

                                                
                                    

Test pass (254/287)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 8.11
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.23.6/json-events 6.03
11 TestDownloadOnly/v1.23.6/preload-exists 0
15 TestDownloadOnly/v1.23.6/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.21
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.2
19 TestBinaryMirror 0.56
20 TestOffline 113.96
22 TestAddons/Setup 149.57
24 TestAddons/parallel/Registry 19.28
25 TestAddons/parallel/Ingress 29.98
26 TestAddons/parallel/MetricsServer 5.75
27 TestAddons/parallel/HelmTiller 13.12
29 TestAddons/parallel/CSI 43.86
31 TestAddons/serial/GCPAuth 39.79
32 TestAddons/StoppedEnableDisable 92.55
33 TestCertOptions 94.01
34 TestCertExpiration 318.18
36 TestForceSystemdFlag 98.46
37 TestForceSystemdEnv 79.29
38 TestKVMDriverInstallOrUpdate 4.56
42 TestErrorSpam/setup 62.49
43 TestErrorSpam/start 0.37
44 TestErrorSpam/status 0.73
45 TestErrorSpam/pause 2.85
46 TestErrorSpam/unpause 1.4
47 TestErrorSpam/stop 5.51
50 TestFunctional/serial/CopySyncFile 0
51 TestFunctional/serial/StartWithProxy 78.94
52 TestFunctional/serial/AuditLog 0
53 TestFunctional/serial/SoftStart 26.59
54 TestFunctional/serial/KubeContext 0.04
55 TestFunctional/serial/KubectlGetPods 0.16
58 TestFunctional/serial/CacheCmd/cache/add_remote 3.81
59 TestFunctional/serial/CacheCmd/cache/add_local 2.32
60 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.06
61 TestFunctional/serial/CacheCmd/cache/list 0.06
62 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
63 TestFunctional/serial/CacheCmd/cache/cache_reload 1.93
64 TestFunctional/serial/CacheCmd/cache/delete 0.12
65 TestFunctional/serial/MinikubeKubectlCmd 0.11
66 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
67 TestFunctional/serial/ExtraConfig 36.84
68 TestFunctional/serial/ComponentHealth 0.06
69 TestFunctional/serial/LogsCmd 1.22
70 TestFunctional/serial/LogsFileCmd 1.24
72 TestFunctional/parallel/ConfigCmd 0.47
73 TestFunctional/parallel/DashboardCmd 14.01
74 TestFunctional/parallel/DryRun 0.36
75 TestFunctional/parallel/InternationalLanguage 0.17
76 TestFunctional/parallel/StatusCmd 0.91
79 TestFunctional/parallel/ServiceCmd 12.7
80 TestFunctional/parallel/ServiceCmdConnect 10.53
81 TestFunctional/parallel/AddonsCmd 0.21
82 TestFunctional/parallel/PersistentVolumeClaim 45.99
84 TestFunctional/parallel/SSHCmd 0.49
85 TestFunctional/parallel/CpCmd 0.89
86 TestFunctional/parallel/MySQL 35.34
87 TestFunctional/parallel/FileSync 0.33
88 TestFunctional/parallel/CertSync 1.57
92 TestFunctional/parallel/NodeLabels 0.06
94 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
96 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
105 TestFunctional/parallel/ProfileCmd/profile_list 0.35
106 TestFunctional/parallel/MountCmd/any-port 10.69
107 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
108 TestFunctional/parallel/MountCmd/specific-port 1.76
109 TestFunctional/parallel/Version/short 0.07
110 TestFunctional/parallel/Version/components 0.68
111 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
112 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
113 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
114 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
115 TestFunctional/parallel/ImageCommands/ImageBuild 4.68
116 TestFunctional/parallel/ImageCommands/Setup 1.54
117 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.98
118 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
119 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
120 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 6.96
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.74
123 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.36
124 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
125 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.76
126 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.75
127 TestFunctional/delete_addon-resizer_images 0.09
128 TestFunctional/delete_my-image_image 0.03
129 TestFunctional/delete_minikube_cached_images 0.03
132 TestIngressAddonLegacy/StartLegacyK8sCluster 80.92
134 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 15.21
135 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.38
136 TestIngressAddonLegacy/serial/ValidateIngressAddons 37.71
139 TestJSONOutput/start/Command 114.22
140 TestJSONOutput/start/Audit 0
142 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
143 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
145 TestJSONOutput/pause/Command 0.62
146 TestJSONOutput/pause/Audit 0
148 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
149 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
151 TestJSONOutput/unpause/Command 0.6
152 TestJSONOutput/unpause/Audit 0
154 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
155 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
157 TestJSONOutput/stop/Command 7.1
158 TestJSONOutput/stop/Audit 0
160 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
162 TestErrorJSONOutput 0.28
166 TestMainNoArgs 0.06
167 TestMinikubeProfile 124.68
170 TestMountStart/serial/StartWithMountFirst 26.96
171 TestMountStart/serial/VerifyMountFirst 0.4
172 TestMountStart/serial/StartWithMountSecond 26.92
173 TestMountStart/serial/VerifyMountSecond 0.41
174 TestMountStart/serial/DeleteFirst 1.16
175 TestMountStart/serial/VerifyMountPostDelete 0.41
176 TestMountStart/serial/Stop 1.2
177 TestMountStart/serial/RestartStopped 22.05
178 TestMountStart/serial/VerifyMountPostStop 0.39
181 TestMultiNode/serial/FreshStart2Nodes 146.84
182 TestMultiNode/serial/DeployApp2Nodes 5.16
183 TestMultiNode/serial/PingHostFrom2Pods 0.87
184 TestMultiNode/serial/AddNode 59.95
185 TestMultiNode/serial/ProfileList 0.23
186 TestMultiNode/serial/CopyFile 7.64
187 TestMultiNode/serial/StopNode 2.23
188 TestMultiNode/serial/StartAfterStop 48.24
189 TestMultiNode/serial/RestartKeepsNodes 513.95
190 TestMultiNode/serial/DeleteNode 2.17
191 TestMultiNode/serial/StopMultiNode 184.17
192 TestMultiNode/serial/RestartMultiNode 233.17
193 TestMultiNode/serial/ValidateNameConflict 62.83
198 TestPreload 174.82
200 TestScheduledStopUnix 132.45
204 TestRunningBinaryUpgrade 160.83
206 TestKubernetesUpgrade 245.04
208 TestStoppedBinaryUpgrade/Setup 0.58
218 TestPause/serial/Start 94.43
220 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
221 TestNoKubernetes/serial/StartWithK8s 66.77
223 TestNoKubernetes/serial/StartWithStopK8s 10.74
224 TestNoKubernetes/serial/Start 26.59
232 TestNetworkPlugins/group/false 0.37
236 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
237 TestNoKubernetes/serial/ProfileList 0.69
238 TestNoKubernetes/serial/Stop 1.23
239 TestNoKubernetes/serial/StartNoArgs 66.96
240 TestStoppedBinaryUpgrade/MinikubeLogs 0.58
241 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
243 TestStartStop/group/old-k8s-version/serial/FirstStart 183.51
245 TestStartStop/group/embed-certs/serial/FirstStart 171.68
247 TestStartStop/group/no-preload/serial/FirstStart 127.79
248 TestStartStop/group/old-k8s-version/serial/DeployApp 10.54
249 TestStartStop/group/embed-certs/serial/DeployApp 8.48
250 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.67
251 TestStartStop/group/old-k8s-version/serial/Stop 92.37
252 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.75
253 TestStartStop/group/embed-certs/serial/Stop 91.94
254 TestStartStop/group/no-preload/serial/DeployApp 10.45
255 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.69
256 TestStartStop/group/no-preload/serial/Stop 92.47
258 TestStartStop/group/default-k8s-different-port/serial/FirstStart 120.36
259 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
260 TestStartStop/group/old-k8s-version/serial/SecondStart 476.41
261 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.26
262 TestStartStop/group/embed-certs/serial/SecondStart 430
263 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
264 TestStartStop/group/no-preload/serial/SecondStart 339.73
265 TestStartStop/group/default-k8s-different-port/serial/DeployApp 9.5
266 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.71
267 TestStartStop/group/default-k8s-different-port/serial/Stop 92.38
268 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.19
269 TestStartStop/group/default-k8s-different-port/serial/SecondStart 403.05
270 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 9.02
271 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
272 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
273 TestStartStop/group/no-preload/serial/Pause 2.48
275 TestStartStop/group/newest-cni/serial/FirstStart 73.61
276 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 15.02
277 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
278 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
279 TestStartStop/group/embed-certs/serial/Pause 2.37
280 TestNetworkPlugins/group/auto/Start 115.26
281 TestStartStop/group/newest-cni/serial/DeployApp 0
282 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.78
283 TestStartStop/group/newest-cni/serial/Stop 4.13
284 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
285 TestStartStop/group/newest-cni/serial/SecondStart 81.37
286 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
287 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
288 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
289 TestStartStop/group/old-k8s-version/serial/Pause 4.08
290 TestNetworkPlugins/group/kindnet/Start 102.58
291 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
292 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
293 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
294 TestStartStop/group/newest-cni/serial/Pause 2.16
295 TestNetworkPlugins/group/cilium/Start 119.16
296 TestNetworkPlugins/group/auto/KubeletFlags 0.23
297 TestNetworkPlugins/group/auto/NetCatPod 12.4
298 TestNetworkPlugins/group/auto/DNS 0.28
299 TestNetworkPlugins/group/auto/Localhost 0.22
300 TestNetworkPlugins/group/auto/HairPin 0.22
301 TestNetworkPlugins/group/calico/Start 113.41
302 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 14.02
303 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
304 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
305 TestNetworkPlugins/group/kindnet/NetCatPod 11.57
306 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.12
307 TestNetworkPlugins/group/kindnet/DNS 0.19
308 TestNetworkPlugins/group/kindnet/Localhost 0.14
309 TestNetworkPlugins/group/kindnet/HairPin 0.18
310 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.74
311 TestStartStop/group/default-k8s-different-port/serial/Pause 3.03
312 TestNetworkPlugins/group/custom-flannel/Start 89.6
313 TestNetworkPlugins/group/flannel/Start 97.21
314 TestNetworkPlugins/group/cilium/ControllerPod 5.15
315 TestNetworkPlugins/group/cilium/KubeletFlags 0.27
316 TestNetworkPlugins/group/cilium/NetCatPod 13.55
317 TestNetworkPlugins/group/cilium/DNS 0.26
318 TestNetworkPlugins/group/cilium/Localhost 0.19
319 TestNetworkPlugins/group/cilium/HairPin 0.19
320 TestNetworkPlugins/group/bridge/Start 80.71
321 TestNetworkPlugins/group/calico/ControllerPod 5.02
322 TestNetworkPlugins/group/calico/KubeletFlags 0.23
323 TestNetworkPlugins/group/calico/NetCatPod 13.52
324 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
325 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.5
326 TestNetworkPlugins/group/custom-flannel/DNS 0.18
327 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
328 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
329 TestNetworkPlugins/group/calico/DNS 0.24
330 TestNetworkPlugins/group/enable-default-cni/Start 78.89
331 TestNetworkPlugins/group/calico/Localhost 0.16
332 TestNetworkPlugins/group/calico/HairPin 0.17
333 TestNetworkPlugins/group/flannel/ControllerPod 5.02
334 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
335 TestNetworkPlugins/group/flannel/NetCatPod 16.45
336 TestNetworkPlugins/group/flannel/DNS 0.15
337 TestNetworkPlugins/group/flannel/Localhost 0.13
338 TestNetworkPlugins/group/flannel/HairPin 0.15
339 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
340 TestNetworkPlugins/group/bridge/NetCatPod 10.42
341 TestNetworkPlugins/group/bridge/DNS 0.17
342 TestNetworkPlugins/group/bridge/Localhost 0.13
343 TestNetworkPlugins/group/bridge/HairPin 0.12
344 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
345 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.42
346 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
347 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
348 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12

TestDownloadOnly/v1.16.0/json-events (8.11s)
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220601102001-7337 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220601102001-7337 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (8.107023727s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (8.11s)

TestDownloadOnly/v1.16.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220601102001-7337
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220601102001-7337: exit status 85 (76.611492ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 10:20:01
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 10:20:01.294635    7350 out.go:296] Setting OutFile to fd 1 ...
	I0601 10:20:01.294761    7350 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:20:01.294776    7350 out.go:309] Setting ErrFile to fd 2...
	I0601 10:20:01.294784    7350 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:20:01.294912    7350 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	W0601 10:20:01.295062    7350 root.go:300] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/config/config.json: no such file or directory
	I0601 10:20:01.295285    7350 out.go:303] Setting JSON to true
	I0601 10:20:01.296149    7350 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":156,"bootTime":1654078646,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 10:20:01.296222    7350 start.go:125] virtualization: kvm guest
	I0601 10:20:01.299023    7350 out.go:97] [download-only-20220601102001-7337] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 10:20:01.299119    7350 notify.go:193] Checking for updates...
	W0601 10:20:01.299156    7350 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball: no such file or directory
	I0601 10:20:01.300817    7350 out.go:169] MINIKUBE_LOCATION=14079
	I0601 10:20:01.302403    7350 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 10:20:01.303987    7350 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 10:20:01.305496    7350 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 10:20:01.307212    7350 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0601 10:20:01.309984    7350 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0601 10:20:01.310146    7350 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 10:20:01.405784    7350 out.go:97] Using the kvm2 driver based on user configuration
	I0601 10:20:01.405808    7350 start.go:284] selected driver: kvm2
	I0601 10:20:01.405814    7350 start.go:806] validating driver "kvm2" against <nil>
	I0601 10:20:01.406204    7350 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 10:20:01.406410    7350 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0601 10:20:01.420882    7350 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.26.0-beta.1
	I0601 10:20:01.420956    7350 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0601 10:20:01.421402    7350 start_flags.go:373] Using suggested 6000MB memory alloc based on sys=32103MB, container=0MB
	I0601 10:20:01.421498    7350 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0601 10:20:01.421532    7350 cni.go:95] Creating CNI manager for ""
	I0601 10:20:01.421541    7350 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
	I0601 10:20:01.421550    7350 start_flags.go:301] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0601 10:20:01.421560    7350 start_flags.go:306] config:
	{Name:download-only-20220601102001-7337 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220601102001-7337 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:20:01.421738    7350 iso.go:128] acquiring lock: {Name:mkad95a9aa9919c9e63cafd3e91a2bd2bcafb74e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 10:20:01.423932    7350 out.go:97] Downloading VM boot image ...
	I0601 10:20:01.423963    7350 download.go:101] Downloading: https://storage.googleapis.com/minikube-builds/iso/13807/minikube-v1.26.0-1653677468-13807-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/13807/minikube-v1.26.0-1653677468-13807-amd64.iso.sha256 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/iso/amd64/minikube-v1.26.0-1653677468-13807-amd64.iso
	I0601 10:20:04.149316    7350 out.go:97] Starting control plane node download-only-20220601102001-7337 in cluster download-only-20220601102001-7337
	I0601 10:20:04.149335    7350 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0601 10:20:04.257878    7350 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0601 10:20:04.257918    7350 cache.go:57] Caching tarball of preloaded images
	I0601 10:20:04.258096    7350 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0601 10:20:04.260134    7350 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0601 10:20:04.260156    7350 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0601 10:20:04.366759    7350 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220601102001-7337"
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
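
As an aside, the preload download recorded in the log above can be checked by hand. A minimal sketch, assuming curl and md5sum are available on the host and reusing the URL and md5 checksum that the downloader logged (the working directory is arbitrary):

    # Fetch the v1.16.0 preload tarball named in the log above
    curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
    # Verify it against the md5 the downloader used (see the download.go line above)
    echo "d96a2b2afa188e17db7ddabb58d563fd  preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4" | md5sum -c -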

TestDownloadOnly/v1.23.6/json-events (6.03s)
=== RUN   TestDownloadOnly/v1.23.6/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220601102001-7337 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220601102001-7337 --force --alsologtostderr --kubernetes-version=v1.23.6 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (6.033947774s)
--- PASS: TestDownloadOnly/v1.23.6/json-events (6.03s)

TestDownloadOnly/v1.23.6/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.23.6/preload-exists
--- PASS: TestDownloadOnly/v1.23.6/preload-exists (0.00s)

TestDownloadOnly/v1.23.6/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.23.6/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220601102001-7337
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220601102001-7337: exit status 85 (76.648958ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/06/01 10:20:09
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.18.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0601 10:20:09.480331    7387 out.go:296] Setting OutFile to fd 1 ...
	I0601 10:20:09.480497    7387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:20:09.480507    7387 out.go:309] Setting ErrFile to fd 2...
	I0601 10:20:09.480514    7387 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:20:09.480636    7387 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	W0601 10:20:09.480759    7387 root.go:300] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/config/config.json: no such file or directory
	I0601 10:20:09.480891    7387 out.go:303] Setting JSON to true
	I0601 10:20:09.481631    7387 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":164,"bootTime":1654078646,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 10:20:09.481691    7387 start.go:125] virtualization: kvm guest
	I0601 10:20:09.484251    7387 out.go:97] [download-only-20220601102001-7337] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 10:20:09.484371    7387 notify.go:193] Checking for updates...
	I0601 10:20:09.485934    7387 out.go:169] MINIKUBE_LOCATION=14079
	I0601 10:20:09.487583    7387 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 10:20:09.489115    7387 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 10:20:09.490538    7387 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 10:20:09.491942    7387 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0601 10:20:09.494572    7387 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0601 10:20:09.495072    7387 config.go:178] Loaded profile config "download-only-20220601102001-7337": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0601 10:20:09.495124    7387 start.go:714] api.Load failed for download-only-20220601102001-7337: filestore "download-only-20220601102001-7337": Docker machine "download-only-20220601102001-7337" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0601 10:20:09.495181    7387 driver.go:358] Setting default libvirt URI to qemu:///system
	W0601 10:20:09.495218    7387 start.go:714] api.Load failed for download-only-20220601102001-7337: filestore "download-only-20220601102001-7337": Docker machine "download-only-20220601102001-7337" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0601 10:20:09.526198    7387 out.go:97] Using the kvm2 driver based on existing profile
	I0601 10:20:09.526230    7387 start.go:284] selected driver: kvm2
	I0601 10:20:09.526234    7387 start.go:806] validating driver "kvm2" against &{Name:download-only-20220601102001-7337 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/13807/minikube-v1.26.0-1653677468-13807-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220601102001-7337 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:20:09.526637    7387 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 10:20:09.526839    7387 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0601 10:20:09.541335    7387 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.26.0-beta.1
	I0601 10:20:09.542085    7387 cni.go:95] Creating CNI manager for ""
	I0601 10:20:09.542103    7387 cni.go:165] "kvm2" driver + containerd runtime found, recommending bridge
	I0601 10:20:09.542110    7387 start_flags.go:306] config:
	{Name:download-only-20220601102001-7337 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/13807/minikube-v1.26.0-1653677468-13807-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:download-only-20220601102001-7337 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:20:09.542242    7387 iso.go:128] acquiring lock: {Name:mkad95a9aa9919c9e63cafd3e91a2bd2bcafb74e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0601 10:20:09.544151    7387 out.go:97] Starting control plane node download-only-20220601102001-7337 in cluster download-only-20220601102001-7337
	I0601 10:20:09.544168    7387 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 10:20:09.653664    7387 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 10:20:09.653695    7387 cache.go:57] Caching tarball of preloaded images
	I0601 10:20:09.653860    7387 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 10:20:09.656333    7387 out.go:97] Downloading Kubernetes v1.23.6 preload ...
	I0601 10:20:09.656355    7387 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 ...
	I0601 10:20:09.761038    7387 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4?checksum=md5:af5c6eac9f26fa4c647c193efff8a3b0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4
	I0601 10:20:12.764721    7387 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 ...
	I0601 10:20:12.764825    7387 preload.go:256] verifying checksumm of /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-containerd-overlay2-amd64.tar.lz4 ...
	I0601 10:20:13.764692    7387 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6 on containerd
	I0601 10:20:13.764813    7387 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/download-only-20220601102001-7337/config.json ...
	I0601 10:20:13.764991    7387 preload.go:132] Checking if preload exists for k8s version v1.23.6 and runtime containerd
	I0601 10:20:13.765193    7387 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/cache/linux/amd64/v1.23.6/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220601102001-7337"
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.6/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.21s)
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.21s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.2s)
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20220601102001-7337
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.20s)

TestBinaryMirror (0.56s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-20220601102016-7337 --alsologtostderr --binary-mirror http://127.0.0.1:37005 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-20220601102016-7337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-20220601102016-7337
--- PASS: TestBinaryMirror (0.56s)
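
For context, TestBinaryMirror passes --binary-mirror so that minikube fetches the kubectl/kubelet/kubeadm binaries from a local HTTP endpoint instead of storage.googleapis.com. A rough sketch of standing up such a mirror by hand, assuming python3 is available, that the mirror mimics the upstream release path layout (including the .sha256 files the downloader consults, per the download.go lines elsewhere in this report), and using an arbitrary port and hypothetical profile name:

    # Hypothetical local mirror replicating the kubernetes-release layout for one binary
    mkdir -p mirror/v1.23.6/bin/linux/amd64
    curl -fLo mirror/v1.23.6/bin/linux/amd64/kubectl \
        https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl
    curl -fLo mirror/v1.23.6/bin/linux/amd64/kubectl.sha256 \
        https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl.sha256
    (cd mirror && python3 -m http.server 37005 &)
    # Point minikube at the mirror, echoing the invocation in the log above
    out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
        --binary-mirror http://127.0.0.1:37005 --driver=kvm2 --container-runtime=containerd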

TestOffline (113.96s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-20220601110426-7337 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20220601110426-7337 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m52.417953737s)
helpers_test.go:175: Cleaning up "offline-containerd-20220601110426-7337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-20220601110426-7337
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20220601110426-7337: (1.541786328s)
--- PASS: TestOffline (113.96s)

TestAddons/Setup (149.57s)
=== RUN   TestAddons/Setup
addons_test.go:75: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20220601102016-7337 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:75: (dbg) Done: out/minikube-linux-amd64 start -p addons-20220601102016-7337 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m29.568135346s)
--- PASS: TestAddons/Setup (149.57s)

TestAddons/parallel/Registry (19.28s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:280: registry stabilized in 19.229225ms
addons_test.go:282: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-5zqns" [ad471836-cb94-421a-bce2-bafe132e6981] Running
addons_test.go:282: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.019607006s
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-6v54v" [f5ab5ad8-0256-473d-a39f-d75d18d891c7] Running
addons_test.go:285: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009184182s
addons_test.go:290: (dbg) Run:  kubectl --context addons-20220601102016-7337 delete po -l run=registry-test --now
addons_test.go:295: (dbg) Run:  kubectl --context addons-20220601102016-7337 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:295: (dbg) Done: kubectl --context addons-20220601102016-7337 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.601956951s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220601102016-7337 ip
2022/06/01 10:23:05 [DEBUG] GET http://192.168.50.152:5000
addons_test.go:338: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220601102016-7337 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.28s)

TestAddons/parallel/Ingress (29.98s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:162: (dbg) Run:  kubectl --context addons-20220601102016-7337 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:162: (dbg) Done: kubectl --context addons-20220601102016-7337 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.675538815s)
addons_test.go:182: (dbg) Run:  kubectl --context addons-20220601102016-7337 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:195: (dbg) Run:  kubectl --context addons-20220601102016-7337 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [8474c68c-aed1-41b4-9de5-16b2efc9d870] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [8474c68c-aed1-41b4-9de5-16b2efc9d870] Running
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.027009338s
addons_test.go:212: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220601102016-7337 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:236: (dbg) Run:  kubectl --context addons-20220601102016-7337 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220601102016-7337 ip
addons_test.go:247: (dbg) Run:  nslookup hello-john.test 192.168.50.152
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220601102016-7337 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220601102016-7337 addons disable ingress --alsologtostderr -v=1
addons_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p addons-20220601102016-7337 addons disable ingress --alsologtostderr -v=1: (7.604018068s)
--- PASS: TestAddons/parallel/Ingress (29.98s)

TestAddons/parallel/MetricsServer (5.75s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:357: metrics-server stabilized in 19.807197ms
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-bd6f4dd56-vwjrn" [4134ac2a-09bc-4704-8062-3e995a659a03] Running
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.021455199s
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220601102016-7337 top pods -n kube-system
addons_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220601102016-7337 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.75s)

TestAddons/parallel/HelmTiller (13.12s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:406: tiller-deploy stabilized in 19.967129ms
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-6d67d5465d-wq56v" [d17d4953-b480-4b94-bf90-838a5d7d3e78] Running
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.015670088s
addons_test.go:423: (dbg) Run:  kubectl --context addons-20220601102016-7337 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:423: (dbg) Done: kubectl --context addons-20220601102016-7337 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.628103864s)
addons_test.go:428: kubectl --context addons-20220601102016-7337 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
addons_test.go:440: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220601102016-7337 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.12s)
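
A side note on the "Unable to use a TTY" stderr recorded above: the test invokes kubectl run with -it, but the CI shell has no terminal attached, so kubectl warns and falls back to logs. A minimal sketch of the same check without the warning, keeping stdin (-i) but dropping the TTY request (-t); the context name is the one from this run:

    # Run the helm version check without requesting a TTY
    kubectl --context addons-20220601102016-7337 run --rm helm-test \
        --restart=Never --image=alpine/helm:2.16.3 -i --namespace=kube-system -- version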

TestAddons/parallel/CSI (43.86s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:511: csi-hostpath-driver pods stabilized in 27.814388ms
addons_test.go:514: (dbg) Run:  kubectl --context addons-20220601102016-7337 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:519: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220601102016-7337 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220601102016-7337 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:524: (dbg) Run:  kubectl --context addons-20220601102016-7337 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:529: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [33ac6b35-7d81-48e8-ac14-2661f2072787] Pending
helpers_test.go:342: "task-pv-pod" [33ac6b35-7d81-48e8-ac14-2661f2072787] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod" [33ac6b35-7d81-48e8-ac14-2661f2072787] Running
addons_test.go:529: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 20.015694486s
addons_test.go:534: (dbg) Run:  kubectl --context addons-20220601102016-7337 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:539: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220601102016-7337 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220601102016-7337 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:544: (dbg) Run:  kubectl --context addons-20220601102016-7337 delete pod task-pv-pod
addons_test.go:550: (dbg) Run:  kubectl --context addons-20220601102016-7337 delete pvc hpvc
addons_test.go:556: (dbg) Run:  kubectl --context addons-20220601102016-7337 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:561: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220601102016-7337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220601102016-7337 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:566: (dbg) Run:  kubectl --context addons-20220601102016-7337 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [920ba48f-7cf5-4f55-91bb-2af19eaa3bf5] Pending
helpers_test.go:342: "task-pv-pod-restore" [920ba48f-7cf5-4f55-91bb-2af19eaa3bf5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [920ba48f-7cf5-4f55-91bb-2af19eaa3bf5] Running
addons_test.go:571: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.017618353s
addons_test.go:576: (dbg) Run:  kubectl --context addons-20220601102016-7337 delete pod task-pv-pod-restore
addons_test.go:580: (dbg) Run:  kubectl --context addons-20220601102016-7337 delete pvc hpvc-restore
addons_test.go:584: (dbg) Run:  kubectl --context addons-20220601102016-7337 delete volumesnapshot new-snapshot-demo
addons_test.go:588: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220601102016-7337 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:588: (dbg) Done: out/minikube-linux-amd64 -p addons-20220601102016-7337 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.859023684s)
addons_test.go:592: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220601102016-7337 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (43.86s)

TestAddons/serial/GCPAuth (39.79s)
=== RUN   TestAddons/serial/GCPAuth
addons_test.go:603: (dbg) Run:  kubectl --context addons-20220601102016-7337 create -f testdata/busybox.yaml
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [e1b1db1f-baec-4a07-80fd-7845d90db388] Pending
helpers_test.go:342: "busybox" [e1b1db1f-baec-4a07-80fd-7845d90db388] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [e1b1db1f-baec-4a07-80fd-7845d90db388] Running
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 9.009749506s
addons_test.go:615: (dbg) Run:  kubectl --context addons-20220601102016-7337 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:652: (dbg) Run:  kubectl --context addons-20220601102016-7337 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:665: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220601102016-7337 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:665: (dbg) Done: out/minikube-linux-amd64 -p addons-20220601102016-7337 addons disable gcp-auth --alsologtostderr -v=1: (5.721521729s)
addons_test.go:681: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220601102016-7337 addons enable gcp-auth
addons_test.go:687: (dbg) Run:  kubectl --context addons-20220601102016-7337 apply -f testdata/private-image.yaml
addons_test.go:694: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:342: "private-image-7f8587d5b7-z8sk4" [61f0ccc2-699d-4e9b-aaea-d27866ff7120] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:342: "private-image-7f8587d5b7-z8sk4" [61f0ccc2-699d-4e9b-aaea-d27866ff7120] Running
addons_test.go:694: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 15.006090128s
addons_test.go:700: (dbg) Run:  kubectl --context addons-20220601102016-7337 apply -f testdata/private-image-eu.yaml
addons_test.go:705: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:342: "private-image-eu-869dcfd8c7-5cl8k" [a43b90c7-9dae-4f0a-9160-88d3443c4515] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:342: "private-image-eu-869dcfd8c7-5cl8k" [a43b90c7-9dae-4f0a-9160-88d3443c4515] Running
addons_test.go:705: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 8.008519692s
--- PASS: TestAddons/serial/GCPAuth (39.79s)
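The gcp-auth assertions above reduce to a short manual session; a minimal sketch against the same profile, using only commands that appear in this log:

	$ out/minikube-linux-amd64 -p addons-20220601102016-7337 addons enable gcp-auth
	$ kubectl --context addons-20220601102016-7337 create -f testdata/busybox.yaml
	# with the addon active, credentials are injected into new pods as env vars
	$ kubectl --context addons-20220601102016-7337 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS
	$ kubectl --context addons-20220601102016-7337 exec busybox -- printenv GOOGLE_CLOUD_PROJECT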

TestAddons/StoppedEnableDisable (92.55s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:132: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-20220601102016-7337
addons_test.go:132: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20220601102016-7337: (1m32.355425348s)
addons_test.go:136: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-20220601102016-7337
addons_test.go:140: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-20220601102016-7337
--- PASS: TestAddons/StoppedEnableDisable (92.55s)

TestCertOptions (94.01s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20220601110956-7337 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20220601110956-7337 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m32.196963227s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20220601110956-7337 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-20220601110956-7337 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-20220601110956-7337 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20220601110956-7337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20220601110956-7337
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20220601110956-7337: (1.301403417s)
--- PASS: TestCertOptions (94.01s)
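For reference, the certificate properties this test asserts can be spot-checked by hand; a sketch assuming the same profile (the grep filters are illustrative additions, not quoted from this log):

	# the requested IPs/names should appear as subject alternative names
	$ out/minikube-linux-amd64 -p cert-options-20220601110956-7337 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
	# and the kubeconfig should target the non-default apiserver port 8555
	$ kubectl --context cert-options-20220601110956-7337 config view | grep server: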

TestCertExpiration (318.18s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220601110900-7337 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
E0601 11:09:31.580858    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220601110900-7337 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (2m1.987750071s)
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220601110900-7337 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220601110900-7337 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (15.100958312s)
helpers_test.go:175: Cleaning up "cert-expiration-20220601110900-7337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-20220601110900-7337
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-20220601110900-7337: (1.095290123s)
--- PASS: TestCertExpiration (318.18s)
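The two starts differ only in --cert-expiration (3m, then 8760h); a hypothetical way to confirm the re-issued certificate lifetime, not part of the logged output, would be:

	# print the apiserver certificate's notAfter date inside the node
	$ out/minikube-linux-amd64 -p cert-expiration-20220601110900-7337 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"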

TestForceSystemdFlag (98.46s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20220601110838-7337 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20220601110838-7337 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m37.137026724s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20220601110838-7337 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-20220601110838-7337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20220601110838-7337
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20220601110838-7337: (1.119299386s)
--- PASS: TestForceSystemdFlag (98.46s)
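The `cat /etc/containerd/config.toml` step is how the test inspects the cgroup driver; assuming the standard containerd option name (SystemdCgroup, which this log does not quote), the same check by hand looks like:

	# --force-systemd should switch containerd's runc runtime to the systemd cgroup driver
	$ out/minikube-linux-amd64 -p force-systemd-flag-20220601110838-7337 ssh "grep SystemdCgroup /etc/containerd/config.toml"
	# expected (assumption): SystemdCgroup = true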

TestForceSystemdEnv (79.29s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20220601110836-7337 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20220601110836-7337 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m17.962248125s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20220601110836-7337 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-20220601110836-7337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20220601110836-7337
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20220601110836-7337: (1.109928392s)
--- PASS: TestForceSystemdEnv (79.29s)

TestKVMDriverInstallOrUpdate (4.56s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.56s)

TestErrorSpam/setup (62.49s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20220601102543-7337 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220601102543-7337 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20220601102543-7337 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220601102543-7337 --driver=kvm2  --container-runtime=containerd: (1m2.491158138s)
--- PASS: TestErrorSpam/setup (62.49s)

TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102543-7337 --log_dir /tmp/nospam-20220601102543-7337 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102543-7337 --log_dir /tmp/nospam-20220601102543-7337 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102543-7337 --log_dir /tmp/nospam-20220601102543-7337 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.73s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102543-7337 --log_dir /tmp/nospam-20220601102543-7337 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102543-7337 --log_dir /tmp/nospam-20220601102543-7337 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102543-7337 --log_dir /tmp/nospam-20220601102543-7337 status
--- PASS: TestErrorSpam/status (0.73s)

TestErrorSpam/pause (2.85s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102543-7337 --log_dir /tmp/nospam-20220601102543-7337 pause
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220601102543-7337 --log_dir /tmp/nospam-20220601102543-7337 pause: (2.006438023s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102543-7337 --log_dir /tmp/nospam-20220601102543-7337 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102543-7337 --log_dir /tmp/nospam-20220601102543-7337 pause
--- PASS: TestErrorSpam/pause (2.85s)

TestErrorSpam/unpause (1.40s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102543-7337 --log_dir /tmp/nospam-20220601102543-7337 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102543-7337 --log_dir /tmp/nospam-20220601102543-7337 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102543-7337 --log_dir /tmp/nospam-20220601102543-7337 unpause
--- PASS: TestErrorSpam/unpause (1.40s)

TestErrorSpam/stop (5.51s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102543-7337 --log_dir /tmp/nospam-20220601102543-7337 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220601102543-7337 --log_dir /tmp/nospam-20220601102543-7337 stop: (5.338125559s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102543-7337 --log_dir /tmp/nospam-20220601102543-7337 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220601102543-7337 --log_dir /tmp/nospam-20220601102543-7337 stop
--- PASS: TestErrorSpam/stop (5.51s)

TestFunctional/serial/CopySyncFile (0.00s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1781: local sync path: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/files/etc/test/nested/copy/7337/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (78.94s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2160: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220601102657-7337 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0601 10:27:46.265681    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
E0601 10:27:46.271546    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
E0601 10:27:46.281764    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
E0601 10:27:46.302044    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
E0601 10:27:46.342347    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
E0601 10:27:46.422691    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
E0601 10:27:46.583085    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
E0601 10:27:46.903684    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
E0601 10:27:47.544599    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
E0601 10:27:48.825056    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
E0601 10:27:51.385992    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
E0601 10:27:56.506755    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
E0601 10:28:06.747035    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
functional_test.go:2160: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220601102657-7337 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m18.935870321s)
--- PASS: TestFunctional/serial/StartWithProxy (78.94s)

TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (26.59s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:651: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220601102657-7337 --alsologtostderr -v=8
E0601 10:28:27.227348    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
functional_test.go:651: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220601102657-7337 --alsologtostderr -v=8: (26.59237801s)
functional_test.go:655: soft start took 26.59300576s for "functional-20220601102657-7337" cluster.
--- PASS: TestFunctional/serial/SoftStart (26.59s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:673: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.16s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:688: (dbg) Run:  kubectl --context functional-20220601102657-7337 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.16s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.81s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 cache add k8s.gcr.io/pause:3.1
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 cache add k8s.gcr.io/pause:3.3
functional_test.go:1041: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102657-7337 cache add k8s.gcr.io/pause:3.3: (1.7107818s)
functional_test.go:1041: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 cache add k8s.gcr.io/pause:latest
functional_test.go:1041: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102657-7337 cache add k8s.gcr.io/pause:latest: (1.294804927s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.81s)

TestFunctional/serial/CacheCmd/cache/add_local (2.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1069: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220601102657-7337 /tmp/TestFunctionalserialCacheCmdcacheadd_local1584058716/001
functional_test.go:1081: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 cache add minikube-local-cache-test:functional-20220601102657-7337
functional_test.go:1081: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102657-7337 cache add minikube-local-cache-test:functional-20220601102657-7337: (2.033415757s)
functional_test.go:1086: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 cache delete minikube-local-cache-test:functional-20220601102657-7337
functional_test.go:1075: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220601102657-7337
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.32s)
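Condensed, the local-image round trip exercised above is: build on the host's Docker, add to minikube's cache, then delete from both sides. All four commands are taken from the log:

	$ docker build -t minikube-local-cache-test:functional-20220601102657-7337 /tmp/TestFunctionalserialCacheCmdcacheadd_local1584058716/001
	$ out/minikube-linux-amd64 -p functional-20220601102657-7337 cache add minikube-local-cache-test:functional-20220601102657-7337
	$ out/minikube-linux-amd64 -p functional-20220601102657-7337 cache delete minikube-local-cache-test:functional-20220601102657-7337
	$ docker rmi minikube-local-cache-test:functional-20220601102657-7337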

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1116: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1145: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (225.454548ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1150: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 cache reload
functional_test.go:1150: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102657-7337 cache reload: (1.226835598s)
functional_test.go:1155: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.93s)
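The sequence above is the whole point of `cache reload`: delete the image inside the node, observe the failure, then restore it from the host-side cache. As a shell session (commands as logged; comments added):

	$ out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh sudo crictl rmi k8s.gcr.io/pause:latest
	$ out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # fails: image gone
	$ out/minikube-linux-amd64 -p functional-20220601102657-7337 cache reload
	$ out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # succeeds again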

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1164: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:708: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 kubectl -- --context functional-20220601102657-7337 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:733: (dbg) Run:  out/kubectl --context functional-20220601102657-7337 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (36.84s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:749: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220601102657-7337 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0601 10:29:08.187618    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
functional_test.go:749: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220601102657-7337 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.840665316s)
functional_test.go:753: restart took 36.840764153s for "functional-20220601102657-7337" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.84s)
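--extra-config takes component.key=value and forwards the flag to the named Kubernetes component; the restart exercised above is, in isolation:

	# enable an admission plugin on the apiserver and wait for all components
	$ out/minikube-linux-amd64 start -p functional-20220601102657-7337 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all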

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:802: (dbg) Run:  kubectl --context functional-20220601102657-7337 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:817: etcd phase: Running
functional_test.go:827: etcd status: Ready
functional_test.go:817: kube-apiserver phase: Running
functional_test.go:827: kube-apiserver status: Ready
functional_test.go:817: kube-controller-manager phase: Running
functional_test.go:827: kube-controller-manager status: Ready
functional_test.go:817: kube-scheduler phase: Running
functional_test.go:827: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
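The health check is a plain label query over the control-plane pods; a rough manual equivalent of what the test reads out of the JSON (custom-columns used here for readability, not taken from this log):

	$ kubectl --context functional-20220601102657-7337 get po -l tier=control-plane -n kube-system -o custom-columns=NAME:.metadata.name,PHASE:.status.phase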

TestFunctional/serial/LogsCmd (1.22s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1228: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 logs
functional_test.go:1228: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102657-7337 logs: (1.216782143s)
--- PASS: TestFunctional/serial/LogsCmd (1.22s)

TestFunctional/serial/LogsFileCmd (1.24s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 logs --file /tmp/TestFunctionalserialLogsFileCmd723100944/001/logs.txt
functional_test.go:1242: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102657-7337 logs --file /tmp/TestFunctionalserialLogsFileCmd723100944/001/logs.txt: (1.237129783s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.24s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220601102657-7337 config get cpus: exit status 14 (83.874491ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 config set cpus 2
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 config unset cpus
functional_test.go:1191: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1191: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220601102657-7337 config get cpus: exit status 14 (71.790466ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
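The exit-status-14 runs above encode the `minikube config` contract: `get` on an unset key is an error. A sketch of the full cycle, mirroring the logged commands:

	$ out/minikube-linux-amd64 -p functional-20220601102657-7337 config unset cpus
	$ out/minikube-linux-amd64 -p functional-20220601102657-7337 config get cpus    # exit status 14: key not found
	$ out/minikube-linux-amd64 -p functional-20220601102657-7337 config set cpus 2
	$ out/minikube-linux-amd64 -p functional-20220601102657-7337 config get cpus    # prints the stored value
	$ out/minikube-linux-amd64 -p functional-20220601102657-7337 config unset cpus  # "get" now fails with 14 again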

TestFunctional/parallel/DashboardCmd (14.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:897: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220601102657-7337 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:902: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220601102657-7337 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 11949: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.01s)

TestFunctional/parallel/DryRun (0.36s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220601102657-7337 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:966: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220601102657-7337 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (173.981714ms)
-- stdout --
	* [functional-20220601102657-7337] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0601 10:29:44.385823   11632 out.go:296] Setting OutFile to fd 1 ...
	I0601 10:29:44.386006   11632 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:29:44.386018   11632 out.go:309] Setting ErrFile to fd 2...
	I0601 10:29:44.386025   11632 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:29:44.386172   11632 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 10:29:44.386461   11632 out.go:303] Setting JSON to false
	I0601 10:29:44.387384   11632 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":739,"bootTime":1654078646,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 10:29:44.387454   11632 start.go:125] virtualization: kvm guest
	I0601 10:29:44.389815   11632 out.go:177] * [functional-20220601102657-7337] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 10:29:44.391258   11632 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 10:29:44.392770   11632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 10:29:44.394394   11632 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 10:29:44.396294   11632 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 10:29:44.400947   11632 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 10:29:44.402725   11632 config.go:178] Loaded profile config "functional-20220601102657-7337": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 10:29:44.403251   11632 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0601 10:29:44.403332   11632 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0601 10:29:44.422219   11632 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:35553
	I0601 10:29:44.422621   11632 main.go:134] libmachine: () Calling .GetVersion
	I0601 10:29:44.423184   11632 main.go:134] libmachine: Using API Version  1
	I0601 10:29:44.423213   11632 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 10:29:44.423649   11632 main.go:134] libmachine: () Calling .GetMachineName
	I0601 10:29:44.423833   11632 main.go:134] libmachine: (functional-20220601102657-7337) Calling .DriverName
	I0601 10:29:44.424054   11632 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 10:29:44.424391   11632 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0601 10:29:44.424433   11632 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0601 10:29:44.441836   11632 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:42273
	I0601 10:29:44.442301   11632 main.go:134] libmachine: () Calling .GetVersion
	I0601 10:29:44.442845   11632 main.go:134] libmachine: Using API Version  1
	I0601 10:29:44.442865   11632 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 10:29:44.443209   11632 main.go:134] libmachine: () Calling .GetMachineName
	I0601 10:29:44.443425   11632 main.go:134] libmachine: (functional-20220601102657-7337) Calling .DriverName
	I0601 10:29:44.484612   11632 out.go:177] * Using the kvm2 driver based on existing profile
	I0601 10:29:44.486161   11632 start.go:284] selected driver: kvm2
	I0601 10:29:44.486189   11632 start.go:806] validating driver "kvm2" against &{Name:functional-20220601102657-7337 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/13807/minikube-v1.26.0-1653677468-13807-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220601102657-7337 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.49 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:29:44.486353   11632 start.go:817] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 10:29:44.489224   11632 out.go:177] 
	W0601 10:29:44.490999   11632 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0601 10:29:44.492493   11632 out.go:177] 
** /stderr **
functional_test.go:983: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220601102657-7337 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.36s)
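The failing invocation doubles as a resource-validation probe: with --dry-run nothing is created, and a memory request below the 1800MB usable minimum exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), exactly as captured above:

	$ out/minikube-linux-amd64 start -p functional-20220601102657-7337 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=containerd
	$ echo $?   # 23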

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220601102657-7337 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1012: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220601102657-7337 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (172.194132ms)
-- stdout --
	* [functional-20220601102657-7337] minikube v1.26.0-beta.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0601 10:29:31.867549   10787 out.go:296] Setting OutFile to fd 1 ...
	I0601 10:29:31.867690   10787 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:29:31.867702   10787 out.go:309] Setting ErrFile to fd 2...
	I0601 10:29:31.867709   10787 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:29:31.867891   10787 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 10:29:31.868183   10787 out.go:303] Setting JSON to false
	I0601 10:29:31.869151   10787 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":726,"bootTime":1654078646,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 10:29:31.869233   10787 start.go:125] virtualization: kvm guest
	I0601 10:29:31.872234   10787 out.go:177] * [functional-20220601102657-7337] minikube v1.26.0-beta.1 sur Ubuntu 20.04 (kvm/amd64)
	I0601 10:29:31.874644   10787 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 10:29:31.876222   10787 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 10:29:31.877800   10787 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 10:29:31.879746   10787 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 10:29:31.881436   10787 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 10:29:31.883208   10787 config.go:178] Loaded profile config "functional-20220601102657-7337": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 10:29:31.883750   10787 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0601 10:29:31.883814   10787 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0601 10:29:31.902225   10787 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:37705
	I0601 10:29:31.902702   10787 main.go:134] libmachine: () Calling .GetVersion
	I0601 10:29:31.903319   10787 main.go:134] libmachine: Using API Version  1
	I0601 10:29:31.903345   10787 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 10:29:31.903750   10787 main.go:134] libmachine: () Calling .GetMachineName
	I0601 10:29:31.903988   10787 main.go:134] libmachine: (functional-20220601102657-7337) Calling .DriverName
	I0601 10:29:31.904190   10787 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 10:29:31.904618   10787 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0601 10:29:31.904658   10787 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0601 10:29:31.919554   10787 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:36795
	I0601 10:29:31.919897   10787 main.go:134] libmachine: () Calling .GetVersion
	I0601 10:29:31.920454   10787 main.go:134] libmachine: Using API Version  1
	I0601 10:29:31.920472   10787 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 10:29:31.920825   10787 main.go:134] libmachine: () Calling .GetMachineName
	I0601 10:29:31.921027   10787 main.go:134] libmachine: (functional-20220601102657-7337) Calling .DriverName
	I0601 10:29:31.953382   10787 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0601 10:29:31.954696   10787 start.go:284] selected driver: kvm2
	I0601 10:29:31.954721   10787 start.go:806] validating driver "kvm2" against &{Name:functional-20220601102657-7337 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/13807/minikube-v1.26.0-1653677468-13807-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.31-1653677545-13807@sha256:312115a5663b1250effab8ed8ada9435fca80af41962223c98bf66f86b32c52a Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6 ClusterName:functional-20220601102657-7337 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.49 Port:8441 KubernetesVersion:v1.23.6 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0601 10:29:31.954902   10787 start.go:817] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 10:29:31.957749   10787 out.go:177] 
	W0601 10:29:31.959161   10787 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0601 10:29:31.960413   10787 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
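The French output is driven by the process locale; the harness's exact mechanism is not visible in this log, but a hypothetical reproduction via the standard locale environment variables would be:

	# illustrative only: request French messages for the same failing dry-run
	$ LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-20220601102657-7337 --dry-run --memory 250MB --driver=kvm2 --container-runtime=containerd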

TestFunctional/parallel/StatusCmd (0.91s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:846: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 status
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:852: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:864: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)
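All three output shapes of `status` are exercised above: the default table, a Go template via -f (note the `kublet` label in the logged command is the test's own template text, typo included), and JSON:

	$ out/minikube-linux-amd64 -p functional-20220601102657-7337 status
	$ out/minikube-linux-amd64 -p functional-20220601102657-7337 status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
	$ out/minikube-linux-amd64 -p functional-20220601102657-7337 status -o json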

TestFunctional/parallel/ServiceCmd (12.70s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1432: (dbg) Run:  kubectl --context functional-20220601102657-7337 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1438: (dbg) Run:  kubectl --context functional-20220601102657-7337 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54fbb85-qlq9b" [38a36d59-0c9d-499c-8d87-a50062496950] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54fbb85-qlq9b" [38a36d59-0c9d-499c-8d87-a50062496950] Running
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1443: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 11.012912137s
functional_test.go:1448: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 service list

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1462: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 service --namespace=default --https --url hello-node

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1475: found endpoint: https://192.168.50.49:32623
functional_test.go:1490: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 service hello-node --url --format={{.IP}}

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 service hello-node --url
functional_test.go:1510: found endpoint for hello-node: http://192.168.50.49:32623
--- PASS: TestFunctional/parallel/ServiceCmd (12.70s)
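
For reference, the sequence this test automates can be replayed by hand. The commands below mirror the log above, with "minikube" standing in for out/minikube-linux-amd64:

	# Deploy and expose the sample echoserver, then resolve its NodePort URL.
	kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
	kubectl expose deployment hello-node --type=NodePort --port=8080
	minikube -p functional-20220601102657-7337 service list
	minikube -p functional-20220601102657-7337 service hello-node --url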

TestFunctional/parallel/ServiceCmdConnect (10.53s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20220601102657-7337 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1564: (dbg) Run:  kubectl --context functional-20220601102657-7337 expose deployment hello-node-connect --type=NodePort --port=8080

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-74cf8bc446-p4zgf" [cb9a030d-dbc8-4dcf-8563-b0a0aba4146f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-74cf8bc446-p4zgf" [cb9a030d-dbc8-4dcf-8563-b0a0aba4146f] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1569: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.018252499s
functional_test.go:1578: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 service hello-node-connect --url

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1584: found endpoint for hello-node-connect: http://192.168.50.49:32166
functional_test.go:1604: http://192.168.50.49:32166: success! body:

Hostname: hello-node-connect-74cf8bc446-p4zgf

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.49:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.49:32166
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.53s)
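
The test fetches the endpoint with Go's HTTP client (note the Go-http-client/1.1 user-agent in the echoed headers). The same response can be requested by hand; the IP:port pair below is the one printed above and is only valid for that cluster:

	# Expect the Hostname / Request Information body shown above.
	curl http://192.168.50.49:32166/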

TestFunctional/parallel/AddonsCmd (0.21s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1619: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 addons list

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1631: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

TestFunctional/parallel/PersistentVolumeClaim (45.99s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "storage-provisioner" [6e32aa42-6764-4a98-84ff-0072530e1b96] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.019427732s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220601102657-7337 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220601102657-7337 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220601102657-7337 get pvc myclaim -o=json

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220601102657-7337 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220601102657-7337 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [5e657b39-47e6-4bf9-a4b8-6496316bc4ab] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [5e657b39-47e6-4bf9-a4b8-6496316bc4ab] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [5e657b39-47e6-4bf9-a4b8-6496316bc4ab] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.012286974s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220601102657-7337 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220601102657-7337 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20220601102657-7337 delete -f testdata/storage-provisioner/pod.yaml: (1.947079754s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220601102657-7337 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [2d7f0bc3-4a80-463f-9523-13858a4f6487] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [2d7f0bc3-4a80-463f-9523-13858a4f6487] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2022/06/01 10:29:58 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [2d7f0bc3-4a80-463f-9523-13858a4f6487] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.019109383s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220601102657-7337 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.99s)
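
testdata/storage-provisioner/pvc.yaml is not reproduced in this report. A minimal claim of the same shape would look like the sketch below; the name "myclaim" comes from the log, while the access mode and storage size are assumptions:

	kubectl --context functional-20220601102657-7337 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 500Mi
	EOF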

TestFunctional/parallel/SSHCmd (0.49s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1671: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.49s)
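
Both probes simply run a command in the VM over the profile's SSH key; combined into one illustrative call (not what the test itself runs):

	minikube -p functional-20220601102657-7337 ssh "echo hello && cat /etc/hostname"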

TestFunctional/parallel/CpCmd (0.89s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh -n functional-20220601102657-7337 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 cp functional-20220601102657-7337:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd909497328/001/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh -n functional-20220601102657-7337 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.89s)
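
The copy is exercised in both directions; by hand it looks like the following, where the local destination path is an arbitrary example:

	# Host -> VM, then VM -> host; the test verifies each side with sudo cat.
	minikube -p functional-20220601102657-7337 cp testdata/cp-test.txt /home/docker/cp-test.txt
	minikube -p functional-20220601102657-7337 cp functional-20220601102657-7337:/home/docker/cp-test.txt /tmp/cp-test.txt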

TestFunctional/parallel/MySQL (35.34s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1719: (dbg) Run:  kubectl --context functional-20220601102657-7337 replace --force -f testdata/mysql.yaml
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-4cs7r" [c8662a54-f325-4038-b13a-541f571794cf] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-4cs7r" [c8662a54-f325-4038-b13a-541f571794cf] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1725: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 29.0122933s
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220601102657-7337 exec mysql-b87c45988-4cs7r -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220601102657-7337 exec mysql-b87c45988-4cs7r -- mysql -ppassword -e "show databases;": exit status 1 (150.003936ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220601102657-7337 exec mysql-b87c45988-4cs7r -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220601102657-7337 exec mysql-b87c45988-4cs7r -- mysql -ppassword -e "show databases;": exit status 1 (120.511513ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220601102657-7337 exec mysql-b87c45988-4cs7r -- mysql -ppassword -e "show databases;"
functional_test.go:1733: (dbg) Non-zero exit: kubectl --context functional-20220601102657-7337 exec mysql-b87c45988-4cs7r -- mysql -ppassword -e "show databases;": exit status 1 (134.515071ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1733: (dbg) Run:  kubectl --context functional-20220601102657-7337 exec mysql-b87c45988-4cs7r -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (35.34s)
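
The three non-zero exits above are expected: mysqld was still initializing inside the pod, so the test keeps retrying the same query until it succeeds. A hand-rolled equivalent of that retry loop (the poll interval is an assumption):

	# Retry "show databases;" until mysqld accepts connections.
	until kubectl --context functional-20220601102657-7337 exec mysql-b87c45988-4cs7r -- \
	      mysql -ppassword -e "show databases;"; do
	    sleep 5
	done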

TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1855: Checking for existence of /etc/test/nested/copy/7337/hosts within VM
functional_test.go:1857: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh "sudo cat /etc/test/nested/copy/7337/hosts"
functional_test.go:1862: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

TestFunctional/parallel/CertSync (1.57s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1898: Checking for existence of /etc/ssl/certs/7337.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh "sudo cat /etc/ssl/certs/7337.pem"
functional_test.go:1898: Checking for existence of /usr/share/ca-certificates/7337.pem within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh "sudo cat /usr/share/ca-certificates/7337.pem"
functional_test.go:1898: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1899: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1925: Checking for existence of /etc/ssl/certs/73372.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh "sudo cat /etc/ssl/certs/73372.pem"
functional_test.go:1925: Checking for existence of /usr/share/ca-certificates/73372.pem within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh "sudo cat /usr/share/ca-certificates/73372.pem"
functional_test.go:1925: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.57s)
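
The test only cats the synced files. To additionally confirm that the hashed names (51391683.0, 3ec20f2e.0) are the OpenSSL subject-hash links for those certificates, one could run something like this illustrative check, which is not part of the test:

	minikube -p functional-20220601102657-7337 ssh \
	    "openssl x509 -in /etc/ssl/certs/7337.pem -noout -subject -hash"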

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220601102657-7337 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
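
The go-template prints only the label keys of the first node. kubectl's built-in flag gives a comparable, more readable view:

	kubectl --context functional-20220601102657-7337 get nodes --show-labels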

TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh "sudo systemctl is-active docker"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh "sudo systemctl is-active docker": exit status 1 (225.249214ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:1953: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1953: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh "sudo systemctl is-active crio": exit status 1 (244.261643ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
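
These exit codes are the expected result, not a failure: systemctl is-active exits with status 3 for an inactive unit, which minikube ssh surfaces as exit status 1. On this containerd profile, only the configured runtime should report active:

	# docker and crio report "inactive" above; containerd should report "active".
	minikube -p functional-20220601102657-7337 ssh "sudo systemctl is-active containerd"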

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 profile lis

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1305: (dbg) Run:  out/minikube-linux-amd64 profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: Took "274.838593ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1319: (dbg) Run:  out/minikube-linux-amd64 profile list -l

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1324: Took "73.264526ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

TestFunctional/parallel/MountCmd/any-port (10.69s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220601102657-7337 /tmp/TestFunctionalparallelMountCmdany-port158867396/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1654079371822016406" to /tmp/TestFunctionalparallelMountCmdany-port158867396/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1654079371822016406" to /tmp/TestFunctionalparallelMountCmdany-port158867396/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1654079371822016406" to /tmp/TestFunctionalparallelMountCmdany-port158867396/001/test-1654079371822016406
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (244.570003ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh -- ls -la /mount-9p

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun  1 10:29 created-by-test
-rw-r--r-- 1 docker docker 24 Jun  1 10:29 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun  1 10:29 test-1654079371822016406
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh cat /mount-9p/test-1654079371822016406

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220601102657-7337 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [6abc562a-50ef-4516-81e5-94c8a5176b6a] Pending
helpers_test.go:342: "busybox-mount" [6abc562a-50ef-4516-81e5-94c8a5176b6a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [6abc562a-50ef-4516-81e5-94c8a5176b6a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [6abc562a-50ef-4516-81e5-94c8a5176b6a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.01587636s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220601102657-7337 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220601102657-7337 /tmp/TestFunctionalparallelMountCmdany-port158867396/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.69s)
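
Outside the harness, the same 9p mount can be exercised directly; the host path below is an arbitrary example (the test used a per-run temp directory):

	# Serve a host directory into the VM at /mount-9p (blocks; verify from a second shell).
	minikube -p functional-20220601102657-7337 mount /tmp/example-dir:/mount-9p
	minikube -p functional-20220601102657-7337 ssh "findmnt -T /mount-9p | grep 9p"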

TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1356: (dbg) Run:  out/minikube-linux-amd64 profile list -o json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: Took "243.611538ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1369: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1374: Took "67.078328ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

TestFunctional/parallel/MountCmd/specific-port (1.76s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220601102657-7337 /tmp/TestFunctionalparallelMountCmdspecific-port1349037259/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (254.522678ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh -- ls -la /mount-9p

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220601102657-7337 /tmp/TestFunctionalparallelMountCmdspecific-port1349037259/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh "sudo umount -f /mount-9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh "sudo umount -f /mount-9p": exit status 1 (248.712962ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:225: "out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220601102657-7337 /tmp/TestFunctionalparallelMountCmdspecific-port1349037259/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.76s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2182: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.68s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2196: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.68s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 image ls --format short

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220601102657-7337 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.6
k8s.gcr.io/kube-proxy:v1.23.6
k8s.gcr.io/kube-controller-manager:v1.23.6
k8s.gcr.io/kube-apiserver:v1.23.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220601102657-7337
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-20220601102657-7337
docker.io/kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
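
image ls supports short, table, json, and yaml output, and all four are exercised in this report. The json form is the convenient one to post-process; for example, listing only the tags (assumes jq is installed):

	minikube -p functional-20220601102657-7337 image ls --format json | jq -r '.[].repoTags[]'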

TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 image ls --format table
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220601102657-7337 image ls --format table:
|---------------------------------------------|--------------------------------|---------------|--------|
|                    Image                    |              Tag               |   Image ID    |  Size  |
|---------------------------------------------|--------------------------------|---------------|--------|
| k8s.gcr.io/pause                            | 3.1                            | sha256:da86e6 | 353kB  |
| docker.io/library/minikube-local-cache-test | functional-20220601102657-7337 | sha256:2e5a8f | 1.74kB |
| docker.io/library/nginx                     | latest                         | sha256:0e901e | 56.7MB |
| k8s.gcr.io/echoserver                       | 1.8                            | sha256:82e4c8 | 46.2MB |
| k8s.gcr.io/etcd                             | 3.5.1-0                        | sha256:25f8c7 | 98.9MB |
| k8s.gcr.io/kube-apiserver                   | v1.23.6                        | sha256:8fa62c | 32.6MB |
| k8s.gcr.io/kube-proxy                       | v1.23.6                        | sha256:4c0375 | 39.3MB |
| gcr.io/google-containers/addon-resizer      | functional-20220601102657-7337 | sha256:ffd4cf | 10.8MB |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                         | sha256:a4ca41 | 13.6MB |
| k8s.gcr.io/kube-controller-manager          | v1.23.6                        | sha256:df7b72 | 30.2MB |
| docker.io/kindest/kindnetd                  | v20210326-1e038dc5             | sha256:6de166 | 54MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                   | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                             | sha256:6e38f4 | 9.06MB |
| k8s.gcr.io/kube-scheduler                   | v1.23.6                        | sha256:595f32 | 15.1MB |
| k8s.gcr.io/pause                            | 3.3                            | sha256:0184c1 | 298kB  |
| k8s.gcr.io/pause                            | latest                         | sha256:350b16 | 72.3kB |
| k8s.gcr.io/pause                            | 3.6                            | sha256:6270bb | 302kB  |
|---------------------------------------------|--------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 image ls --format json

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220601102657-7337 image ls --format json:
[{"id":"sha256:7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:36d5b3f60e1a144cc5ada820910535074bdf5cf73fb70d1ff1681537eef4e172"],"repoTags":[],"size":"15029138"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:8fa62c12256df9d9d0c3f1cf90856e27d90f209f42271c2f19326a705342c3b6","repoDigests":["k8s.gcr.io/kube-apiserver@sha256:0cd8c0bed8d89d914ee5df41e8a40112fb0a28804429c7964296abedc94da9f1"],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.6"],"size":"32601483"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"353405"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"297686"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220601102657-7337"],"size":"10823156"},{"id":"sha256:df7b72818ad2e4f1f204c7ffb51239de67f49c6b22671c70354ee5d65ac37657","repoDigests":["k8s.gcr.io/kube-controller-manager@sha256:df94796b78d2285ffe6b231c2b39d25034dde8814de2f75d953a827e77fe6adf"],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.6"],"size":"30173645"},{"id":"sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":["k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db"],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"301773"},{"id":"sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","repoDigests":["docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"],"repoTags":["docker.io/kindest/kindnetd:v20210326-1e038dc5"],"size":"53960776"},{"id":"sha256:7fff914c4a615552dde44bde1183cdaf1656495d54327823c37e897e6c999fe8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2"],"repoTags":[],"size":"73695017"},{"id":"sha256:0e901e68141fd02f237cf63eb842529f8a9500636a9419e3cf4fb986b8fe3d5d","repoDigests":["docker.io/library/nginx@sha256:2bcabc23b45489fb0885d69a06ba1d648aeda973fae7bb981bafbb884165e514"],"repoTags":["docker.io/library/nginx:latest"],"size":"56746739"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:4c037545240644e87d79f6b4071331f9adea6176339c98e529b4af8af00d4e47","repoDigests":["k8s.gcr.io/kube-proxy@sha256:cc007fb495f362f18c74e6f5552060c6785ca2b802a5067251de55c7cc880741"],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.6"],"size":"39277919"},{"id":"sha256:595f327f224a42213913a39d224c8aceb96c81ad3909ae13f6045f570aafe8f0","repoDigests":["k8s.gcr.io/kube-scheduler@sha256:02b4e994459efa49c3e2392733e269893e23d4ac46e92e94107652963caae78b"],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.6"],"size":"15134087"},{"id":"sha256:2e5a8ffd1d926bfaa385f60b99394686a9e197d6e565b28570dfa5e466e7cdc1","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220601102657-7337"],"size":"1738"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"72306"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":["k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e"],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"13585107"},{"id":"sha256:25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":["k8s.gcr.io/etcd@sha256:64b9ea357325d5db9f8a723dcf503b5a449177b17ac87d69481e126bb724c263"],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"98888614"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 image ls --format yaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220601102657-7337 image ls --format yaml:
- id: sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb
repoDigests:
- docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c
repoTags:
- docker.io/kindest/kindnetd:v20210326-1e038dc5
size: "53960776"
- id: sha256:2e5a8ffd1d926bfaa385f60b99394686a9e197d6e565b28570dfa5e466e7cdc1
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220601102657-7337
size: "1738"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "353405"
- id: sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests:
- k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db
repoTags:
- k8s.gcr.io/pause:3.6
size: "301773"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "72306"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "46237695"
- id: sha256:df7b72818ad2e4f1f204c7ffb51239de67f49c6b22671c70354ee5d65ac37657
repoDigests:
- k8s.gcr.io/kube-controller-manager@sha256:df94796b78d2285ffe6b231c2b39d25034dde8814de2f75d953a827e77fe6adf
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.6
size: "30173645"
- id: sha256:595f327f224a42213913a39d224c8aceb96c81ad3909ae13f6045f570aafe8f0
repoDigests:
- k8s.gcr.io/kube-scheduler@sha256:02b4e994459efa49c3e2392733e269893e23d4ac46e92e94107652963caae78b
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.6
size: "15134087"
- id: sha256:7fff914c4a615552dde44bde1183cdaf1656495d54327823c37e897e6c999fe8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2
repoTags: []
size: "73695017"
- id: sha256:7801cfc6d5c072eb114355d369c830641064a246b5a774bcd668fac75ec728e9
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:36d5b3f60e1a144cc5ada820910535074bdf5cf73fb70d1ff1681537eef4e172
repoTags: []
size: "15029138"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220601102657-7337
size: "10823156"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests:
- k8s.gcr.io/etcd@sha256:64b9ea357325d5db9f8a723dcf503b5a449177b17ac87d69481e126bb724c263
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "98888614"
- id: sha256:8fa62c12256df9d9d0c3f1cf90856e27d90f209f42271c2f19326a705342c3b6
repoDigests:
- k8s.gcr.io/kube-apiserver@sha256:0cd8c0bed8d89d914ee5df41e8a40112fb0a28804429c7964296abedc94da9f1
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.6
size: "32601483"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "297686"
- id: sha256:0e901e68141fd02f237cf63eb842529f8a9500636a9419e3cf4fb986b8fe3d5d
repoDigests:
- docker.io/library/nginx@sha256:2bcabc23b45489fb0885d69a06ba1d648aeda973fae7bb981bafbb884165e514
repoTags:
- docker.io/library/nginx:latest
size: "56746739"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests:
- k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "13585107"
- id: sha256:4c037545240644e87d79f6b4071331f9adea6176339c98e529b4af8af00d4e47
repoDigests:
- k8s.gcr.io/kube-proxy@sha256:cc007fb495f362f18c74e6f5552060c6785ca2b802a5067251de55c7cc880741
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.6
size: "39277919"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh pgrep buildkitd
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220601102657-7337 ssh pgrep buildkitd: exit status 1 (248.794555ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 image build -t localhost/my-image:functional-20220601102657-7337 testdata/build
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102657-7337 image build -t localhost/my-image:functional-20220601102657-7337 testdata/build: (4.199636945s)
functional_test.go:318: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20220601102657-7337 image build -t localhost/my-image:functional-20220601102657-7337 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.1s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.4s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 DONE 0.4s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:7d41ed156209ec8b8c9e777219442994908391675efede6c393b36603bd8873b 0.0s done
#8 exporting config sha256:9dec86e96738195e47ac7732f1a52828d69367c2dacaae0716229ae894d28a5a
#8 exporting config sha256:9dec86e96738195e47ac7732f1a52828d69367c2dacaae0716229ae894d28a5a 0.0s done
#8 naming to localhost/my-image:functional-20220601102657-7337 done
#8 DONE 0.3s
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.68s)
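
Note: the three buildkit stages above ([1/3] FROM, [2/3] RUN true, [3/3] ADD content.txt /) imply a Dockerfile of roughly the shape below. This is a sketch reconstructed from the log, not the verbatim contents of testdata/build, and the content.txt payload is an assumption (the log only shows a 62B build context).

    # reproduce the same build by hand (reconstruction; file contents assumed)
    MINIKUBE="$PWD/out/minikube-linux-amd64"
    mkdir -p /tmp/build && cd /tmp/build
    printf 'hello\n' > content.txt
    {
      echo 'FROM gcr.io/k8s-minikube/busybox:latest'
      echo 'RUN true'
      echo 'ADD content.txt /'
    } > Dockerfile
    "$MINIKUBE" -p functional-20220601102657-7337 image build -t localhost/my-image:functional-20220601102657-7337 .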

TestFunctional/parallel/ImageCommands/Setup (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.500316291s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220601102657-7337
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.54s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601102657-7337
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102657-7337 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601102657-7337: (4.705952274s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.98s)
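
Note: Setup plus ImageLoadDaemon exercise the host-docker-to-cluster round trip: pull and retag an image on the host, copy it into the cluster's containerd image store, then list it from inside the cluster. A minimal sketch of the same flow, reusing this run's profile name; the final grep is an added verification step, not part of the test:

    PROFILE=functional-20220601102657-7337
    docker pull gcr.io/google-containers/addon-resizer:1.8.8
    docker tag gcr.io/google-containers/addon-resizer:1.8.8 "gcr.io/google-containers/addon-resizer:$PROFILE"
    # copy the image from the host docker daemon into the cluster
    out/minikube-linux-amd64 -p "$PROFILE" image load --daemon "gcr.io/google-containers/addon-resizer:$PROFILE"
    out/minikube-linux-amd64 -p "$PROFILE" image ls | grep addon-resizer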

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)
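
Note: update-context rewrites the profile's kubeconfig entry so kubectl points at the cluster's current API server address; the three subtests below only vary the starting state of the kubeconfig. A sketch of the call plus a verification step (the kubectl check is an addition assumed here, not part of the test):

    out/minikube-linux-amd64 -p functional-20220601102657-7337 update-context --alsologtostderr -v=2
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-20220601102657-7337")].cluster.server}'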

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601102657-7337
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102657-7337 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601102657-7337: (6.720510638s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (6.96s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220601102657-7337
functional_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601102657-7337
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:240: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102657-7337 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220601102657-7337: (5.057423706s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.74s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 image save gcr.io/google-containers/addon-resizer:functional-20220601102657-7337 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar
functional_test.go:375: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102657-7337 image save gcr.io/google-containers/addon-resizer:functional-20220601102657-7337 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar: (1.363141074s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.36s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 image rm gcr.io/google-containers/addon-resizer:functional-20220601102657-7337
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar
functional_test.go:404: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102657-7337 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar: (1.538636559s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.76s)
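
Note: ImageSaveToFile, ImageRemove, and ImageLoadFromFile together form a tar round trip through the host filesystem. Condensed into one sketch; the tar listing is an added inspection step, and the tar path is arbitrary (this run used the Jenkins workspace):

    PROFILE=functional-20220601102657-7337
    TAR=/tmp/addon-resizer-save.tar
    out/minikube-linux-amd64 -p "$PROFILE" image save "gcr.io/google-containers/addon-resizer:$PROFILE" "$TAR"
    tar tf "$TAR" | head
    out/minikube-linux-amd64 -p "$PROFILE" image rm "gcr.io/google-containers/addon-resizer:$PROFILE"
    out/minikube-linux-amd64 -p "$PROFILE" image load "$TAR"
    out/minikube-linux-amd64 -p "$PROFILE" image ls | grep addon-resizer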

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220601102657-7337
functional_test.go:419: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220601102657-7337 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220601102657-7337
functional_test.go:419: (dbg) Done: out/minikube-linux-amd64 -p functional-20220601102657-7337 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220601102657-7337: (1.688759247s)
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220601102657-7337
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.75s)
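
Note: ImageSaveDaemon is the inverse of ImageLoadDaemon: it exports an image from the cluster back into the host docker daemon, and the final docker image inspect only succeeds if the export landed. The same check as a sketch:

    PROFILE=functional-20220601102657-7337
    docker rmi "gcr.io/google-containers/addon-resizer:$PROFILE"   # make sure it is absent on the host first
    out/minikube-linux-amd64 -p "$PROFILE" image save --daemon "gcr.io/google-containers/addon-resizer:$PROFILE"
    docker image inspect "gcr.io/google-containers/addon-resizer:$PROFILE"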

TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220601102657-7337
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.03s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220601102657-7337
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.03s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220601102657-7337
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestIngressAddonLegacy/StartLegacyK8sCluster (80.92s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-20220601103024-7337 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0601 10:30:30.108664    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-20220601103024-7337 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m20.917752719s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (80.92s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (15.21s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220601103024-7337 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220601103024-7337 addons enable ingress --alsologtostderr -v=5: (15.212299452s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (15.21s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.38s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220601103024-7337 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.38s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (37.71s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:162: (dbg) Run:  kubectl --context ingress-addon-legacy-20220601103024-7337 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:162: (dbg) Done: kubectl --context ingress-addon-legacy-20220601103024-7337 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.138025086s)
addons_test.go:182: (dbg) Run:  kubectl --context ingress-addon-legacy-20220601103024-7337 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:195: (dbg) Run:  kubectl --context ingress-addon-legacy-20220601103024-7337 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [8412d2ad-792b-4019-9752-3de867becce5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [8412d2ad-792b-4019-9752-3de867becce5] Running
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.011616739s
addons_test.go:212: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220601103024-7337 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:236: (dbg) Run:  kubectl --context ingress-addon-legacy-20220601103024-7337 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220601103024-7337 ip
addons_test.go:247: (dbg) Run:  nslookup hello-john.test 192.168.50.190
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220601103024-7337 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220601103024-7337 addons disable ingress-dns --alsologtostderr -v=1: (7.087325167s)
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220601103024-7337 addons disable ingress --alsologtostderr -v=1
addons_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220601103024-7337 addons disable ingress --alsologtostderr -v=1: (7.416972783s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (37.71s)
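
Note: the ingress validation reduces to two probes: HTTP routing through the controller (curl from inside the VM with a Host header) and name resolution through ingress-dns (nslookup against the node IP). A sketch of the same two probes, assuming both addons are enabled and the test's Ingress objects for nginx.example.com and hello-john.test exist:

    PROFILE=ingress-addon-legacy-20220601103024-7337
    # routing: the controller listens on the node, so curl from inside the VM
    out/minikube-linux-amd64 -p "$PROFILE" ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # resolution: ingress-dns answers DNS queries on the node IP
    IP=$(out/minikube-linux-amd64 -p "$PROFILE" ip)
    nslookup hello-john.test "$IP"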

TestJSONOutput/start/Command (114.22s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-20220601103239-7337 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0601 10:32:46.265325    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
E0601 10:33:13.948911    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
E0601 10:34:31.580472    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
E0601 10:34:31.585743    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
E0601 10:34:31.595967    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
E0601 10:34:31.616210    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
E0601 10:34:31.656453    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
E0601 10:34:31.736734    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
E0601 10:34:31.897108    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
E0601 10:34:32.217695    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
E0601 10:34:32.858467    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
E0601 10:34:34.138935    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20220601103239-7337 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m54.220872946s)
--- PASS: TestJSONOutput/start/Command (114.22s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.62s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-20220601103239-7337 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.62s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-20220601103239-7337 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.1s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-20220601103239-7337 --output=json --user=testUser
E0601 10:34:36.699702    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
E0601 10:34:41.820534    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20220601103239-7337 --output=json --user=testUser: (7.104305051s)
--- PASS: TestJSONOutput/stop/Command (7.10s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.28s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20220601103443-7337 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20220601103443-7337 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (81.416483ms)

-- stdout --
	{"specversion":"1.0","id":"9a2e4be7-4925-4b51-baa6-7b4ce9c9b9cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220601103443-7337] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ab7d6bce-1025-4ab9-ab62-ceaaa763bdba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=14079"}}
	{"specversion":"1.0","id":"136071a6-723e-4253-92c7-cb3c567df1cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9c9b8aa8-1226-438d-a3c1-d6fbc922cef5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig"}}
	{"specversion":"1.0","id":"5e62c9de-4a26-44e9-96ba-7a213d35481b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube"}}
	{"specversion":"1.0","id":"1e8c1469-4c14-4cb3-8495-80440250cd3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8ef6d584-a1d6-40a0-a257-53aeb4d85460","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220601103443-7337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20220601103443-7337
--- PASS: TestErrorJSONOutput (0.28s)
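
Note: as the stdout above shows, --output=json emits one CloudEvent per line, so the stream is easy to consume with standard tools. A sketch of filtering out just the step events (the jq pipeline is an addition, not part of the test):

    out/minikube-linux-amd64 start -p json-output-20220601103239-7337 --output=json --user=testUser \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep + "/" + .data.totalsteps + ": " + .data.message'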

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (124.68s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-20220601103443-7337 --driver=kvm2  --container-runtime=containerd
E0601 10:34:52.060962    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
E0601 10:35:12.541202    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-20220601103443-7337 --driver=kvm2  --container-runtime=containerd: (57.890811242s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-20220601103443-7337 --driver=kvm2  --container-runtime=containerd
E0601 10:35:53.501818    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-20220601103443-7337 --driver=kvm2  --container-runtime=containerd: (1m2.643530208s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-20220601103443-7337
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-20220601103443-7337
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-20220601103443-7337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-20220601103443-7337
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-20220601103443-7337: (1.26175654s)
helpers_test.go:175: Cleaning up "first-20220601103443-7337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-20220601103443-7337
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-20220601103443-7337: (1.069832243s)
--- PASS: TestMinikubeProfile (124.68s)
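
Note: TestMinikubeProfile asserts that `profile <name>` switches the active profile and that `profile list -ojson` reports both clusters. A sketch of the same check; the jq extraction and the .valid[].Name field layout are assumptions about the JSON shape, not verified from this log:

    out/minikube-linux-amd64 profile first-20220601103443-7337
    out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'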

TestMountStart/serial/StartWithMountFirst (26.96s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-20220601103648-7337 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0601 10:37:00.914568    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
E0601 10:37:00.919830    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
E0601 10:37:00.930114    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
E0601 10:37:00.950385    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
E0601 10:37:00.990617    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
E0601 10:37:01.070920    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
E0601 10:37:01.231455    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
E0601 10:37:01.552037    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
E0601 10:37:02.193060    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
E0601 10:37:03.473534    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
E0601 10:37:06.035285    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
E0601 10:37:11.155868    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-20220601103648-7337 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (25.959528933s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.96s)

TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-20220601103648-7337 ssh -- ls /minikube-host
E0601 10:37:15.422043    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-20220601103648-7337 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)
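
Note: the --mount flags export a host directory into the guest over 9p (with VM drivers it shows up at /minikube-host by default), and the Verify steps just list that directory and confirm a 9p entry in the guest's mount table. Condensed sketch, reusing this run's flags:

    PROFILE=mount-start-1-20220601103648-7337
    out/minikube-linux-amd64 start -p "$PROFILE" --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 -p "$PROFILE" ssh -- ls /minikube-host
    out/minikube-linux-amd64 -p "$PROFILE" ssh -- "mount | grep 9p"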

TestMountStart/serial/StartWithMountSecond (26.92s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220601103648-7337 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0601 10:37:21.396443    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220601103648-7337 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (25.916226832s)
E0601 10:37:41.877204    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
--- PASS: TestMountStart/serial/StartWithMountSecond (26.92s)

TestMountStart/serial/VerifyMountSecond (0.41s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220601103648-7337 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220601103648-7337 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

TestMountStart/serial/DeleteFirst (1.16s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-20220601103648-7337 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-20220601103648-7337 --alsologtostderr -v=5: (1.164714917s)
--- PASS: TestMountStart/serial/DeleteFirst (1.16s)

TestMountStart/serial/VerifyMountPostDelete (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220601103648-7337 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220601103648-7337 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-20220601103648-7337
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-20220601103648-7337: (1.198827649s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (22.05s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220601103648-7337
E0601 10:37:46.264755    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220601103648-7337: (21.046752574s)
--- PASS: TestMountStart/serial/RestartStopped (22.05s)

TestMountStart/serial/VerifyMountPostStop (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220601103648-7337 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220601103648-7337 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

TestMultiNode/serial/FreshStart2Nodes (146.84s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220601103809-7337 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0601 10:38:22.838028    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
E0601 10:39:31.580585    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
E0601 10:39:44.758977    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
E0601 10:39:59.262537    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220601103809-7337 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m26.433184829s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (146.84s)

TestMultiNode/serial/DeployApp2Nodes (5.16s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103809-7337 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103809-7337 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220601103809-7337 -- rollout status deployment/busybox: (3.43659651s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103809-7337 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103809-7337 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103809-7337 -- exec busybox-7978565885-x89sg -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103809-7337 -- exec busybox-7978565885-zf6tf -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103809-7337 -- exec busybox-7978565885-x89sg -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103809-7337 -- exec busybox-7978565885-zf6tf -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103809-7337 -- exec busybox-7978565885-x89sg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103809-7337 -- exec busybox-7978565885-zf6tf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.16s)
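
Note: DeployApp2Nodes rolls out a two-replica busybox Deployment (one pod landed on each node) and checks in-cluster DNS from each pod. The exec lines above amount to this loop; the loop itself is a condensation added here, while pod discovery via jsonpath is as in the test:

    P=multinode-20220601103809-7337
    for pod in $(out/minikube-linux-amd64 kubectl -p "$P" -- get pods -o jsonpath='{.items[*].metadata.name}'); do
      for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
        out/minikube-linux-amd64 kubectl -p "$P" -- exec "$pod" -- nslookup "$name"
      done
    done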

TestMultiNode/serial/PingHostFrom2Pods (0.87s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103809-7337 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103809-7337 -- exec busybox-7978565885-x89sg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103809-7337 -- exec busybox-7978565885-x89sg -- sh -c "ping -c 1 192.168.50.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103809-7337 -- exec busybox-7978565885-zf6tf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220601103809-7337 -- exec busybox-7978565885-zf6tf -- sh -c "ping -c 1 192.168.50.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)
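
Note: the awk/cut pipeline extracts the address that host.minikube.internal resolves to inside busybox (the answer sits on line 5 of nslookup's output) and then pings it, proving each pod can reach the host. The same probe for one pod, as a sketch:

    P=multinode-20220601103809-7337
    POD=busybox-7978565885-x89sg
    HOST_IP=$(out/minikube-linux-amd64 kubectl -p "$P" -- exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    out/minikube-linux-amd64 kubectl -p "$P" -- exec "$POD" -- sh -c "ping -c 1 $HOST_IP"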

TestMultiNode/serial/AddNode (59.95s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220601103809-7337 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20220601103809-7337 -v 3 --alsologtostderr: (59.369017988s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (59.95s)
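
Note: node add provisions a third machine and joins it to the running cluster; status should then list it. Sketch, with the kubectl check added as an extra (assumed) verification:

    P=multinode-20220601103809-7337
    out/minikube-linux-amd64 node add -p "$P" -v 3 --alsologtostderr
    out/minikube-linux-amd64 -p "$P" status --alsologtostderr
    out/minikube-linux-amd64 kubectl -p "$P" -- get nodes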

TestMultiNode/serial/ProfileList (0.23s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

TestMultiNode/serial/CopyFile (7.64s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 cp testdata/cp-test.txt multinode-20220601103809-7337:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 ssh -n multinode-20220601103809-7337 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 cp multinode-20220601103809-7337:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile567149612/001/cp-test_multinode-20220601103809-7337.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 ssh -n multinode-20220601103809-7337 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 cp multinode-20220601103809-7337:/home/docker/cp-test.txt multinode-20220601103809-7337-m02:/home/docker/cp-test_multinode-20220601103809-7337_multinode-20220601103809-7337-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 ssh -n multinode-20220601103809-7337 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 ssh -n multinode-20220601103809-7337-m02 "sudo cat /home/docker/cp-test_multinode-20220601103809-7337_multinode-20220601103809-7337-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 cp multinode-20220601103809-7337:/home/docker/cp-test.txt multinode-20220601103809-7337-m03:/home/docker/cp-test_multinode-20220601103809-7337_multinode-20220601103809-7337-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 ssh -n multinode-20220601103809-7337 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 ssh -n multinode-20220601103809-7337-m03 "sudo cat /home/docker/cp-test_multinode-20220601103809-7337_multinode-20220601103809-7337-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 cp testdata/cp-test.txt multinode-20220601103809-7337-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 ssh -n multinode-20220601103809-7337-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 cp multinode-20220601103809-7337-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile567149612/001/cp-test_multinode-20220601103809-7337-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 ssh -n multinode-20220601103809-7337-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 cp multinode-20220601103809-7337-m02:/home/docker/cp-test.txt multinode-20220601103809-7337:/home/docker/cp-test_multinode-20220601103809-7337-m02_multinode-20220601103809-7337.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 ssh -n multinode-20220601103809-7337-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 ssh -n multinode-20220601103809-7337 "sudo cat /home/docker/cp-test_multinode-20220601103809-7337-m02_multinode-20220601103809-7337.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 cp multinode-20220601103809-7337-m02:/home/docker/cp-test.txt multinode-20220601103809-7337-m03:/home/docker/cp-test_multinode-20220601103809-7337-m02_multinode-20220601103809-7337-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 ssh -n multinode-20220601103809-7337-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 ssh -n multinode-20220601103809-7337-m03 "sudo cat /home/docker/cp-test_multinode-20220601103809-7337-m02_multinode-20220601103809-7337-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 cp testdata/cp-test.txt multinode-20220601103809-7337-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 ssh -n multinode-20220601103809-7337-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 cp multinode-20220601103809-7337-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile567149612/001/cp-test_multinode-20220601103809-7337-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 ssh -n multinode-20220601103809-7337-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 cp multinode-20220601103809-7337-m03:/home/docker/cp-test.txt multinode-20220601103809-7337:/home/docker/cp-test_multinode-20220601103809-7337-m03_multinode-20220601103809-7337.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 ssh -n multinode-20220601103809-7337-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 ssh -n multinode-20220601103809-7337 "sudo cat /home/docker/cp-test_multinode-20220601103809-7337-m03_multinode-20220601103809-7337.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 cp multinode-20220601103809-7337-m03:/home/docker/cp-test.txt multinode-20220601103809-7337-m02:/home/docker/cp-test_multinode-20220601103809-7337-m03_multinode-20220601103809-7337-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 ssh -n multinode-20220601103809-7337-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 ssh -n multinode-20220601103809-7337-m02 "sudo cat /home/docker/cp-test_multinode-20220601103809-7337-m03_multinode-20220601103809-7337-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.64s)

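The matrix above covers every direction minikube cp supports: host to node, node to host, and node to node, each copy verified by reading the file back over SSH. A condensed sketch with placeholder profile and path names:

    minikube -p multi-demo cp testdata/cp-test.txt multi-demo:/home/docker/cp-test.txt                     # host -> node
    minikube -p multi-demo cp multi-demo:/home/docker/cp-test.txt /tmp/cp-test.txt                         # node -> host
    minikube -p multi-demo cp multi-demo:/home/docker/cp-test.txt multi-demo-m02:/home/docker/cp-test.txt  # node -> node
    minikube -p multi-demo ssh -n multi-demo-m02 "sudo cat /home/docker/cp-test.txt"                       # read it back
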
TestMultiNode/serial/StopNode (2.23s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220601103809-7337 node stop m03: (1.353592815s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220601103809-7337 status: exit status 7 (433.895148ms)

-- stdout --
	multinode-20220601103809-7337
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220601103809-7337-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220601103809-7337-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220601103809-7337 status --alsologtostderr: exit status 7 (445.060695ms)

-- stdout --
	multinode-20220601103809-7337
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220601103809-7337-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220601103809-7337-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
** stderr ** 
	I0601 10:41:51.991585   18124 out.go:296] Setting OutFile to fd 1 ...
	I0601 10:41:51.991691   18124 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:41:51.991700   18124 out.go:309] Setting ErrFile to fd 2...
	I0601 10:41:51.991704   18124 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:41:51.991823   18124 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 10:41:51.992006   18124 out.go:303] Setting JSON to false
	I0601 10:41:51.992025   18124 mustload.go:65] Loading cluster: multinode-20220601103809-7337
	I0601 10:41:51.992409   18124 config.go:178] Loaded profile config "multinode-20220601103809-7337": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 10:41:51.992427   18124 status.go:253] checking status of multinode-20220601103809-7337 ...
	I0601 10:41:51.992819   18124 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0601 10:41:51.992871   18124 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0601 10:41:52.007831   18124 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:35943
	I0601 10:41:52.008252   18124 main.go:134] libmachine: () Calling .GetVersion
	I0601 10:41:52.008861   18124 main.go:134] libmachine: Using API Version  1
	I0601 10:41:52.008889   18124 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 10:41:52.009170   18124 main.go:134] libmachine: () Calling .GetMachineName
	I0601 10:41:52.009363   18124 main.go:134] libmachine: (multinode-20220601103809-7337) Calling .GetState
	I0601 10:41:52.010867   18124 status.go:328] multinode-20220601103809-7337 host status = "Running" (err=<nil>)
	I0601 10:41:52.010879   18124 host.go:66] Checking if "multinode-20220601103809-7337" exists ...
	I0601 10:41:52.011181   18124 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0601 10:41:52.011219   18124 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0601 10:41:52.024964   18124 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:43171
	I0601 10:41:52.025318   18124 main.go:134] libmachine: () Calling .GetVersion
	I0601 10:41:52.025684   18124 main.go:134] libmachine: Using API Version  1
	I0601 10:41:52.025708   18124 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 10:41:52.025996   18124 main.go:134] libmachine: () Calling .GetMachineName
	I0601 10:41:52.026198   18124 main.go:134] libmachine: (multinode-20220601103809-7337) Calling .GetIP
	I0601 10:41:52.028750   18124 main.go:134] libmachine: (multinode-20220601103809-7337) DBG | domain multinode-20220601103809-7337 has defined MAC address 52:54:00:01:e4:cd in network mk-multinode-20220601103809-7337
	I0601 10:41:52.029123   18124 main.go:134] libmachine: (multinode-20220601103809-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:e4:cd", ip: ""} in network mk-multinode-20220601103809-7337: {Iface:virbr5 ExpiryTime:2022-06-01 11:38:22 +0000 UTC Type:0 Mac:52:54:00:01:e4:cd Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:multinode-20220601103809-7337 Clientid:01:52:54:00:01:e4:cd}
	I0601 10:41:52.029153   18124 main.go:134] libmachine: (multinode-20220601103809-7337) DBG | domain multinode-20220601103809-7337 has defined IP address 192.168.50.245 and MAC address 52:54:00:01:e4:cd in network mk-multinode-20220601103809-7337
	I0601 10:41:52.029302   18124 host.go:66] Checking if "multinode-20220601103809-7337" exists ...
	I0601 10:41:52.029597   18124 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0601 10:41:52.029639   18124 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0601 10:41:52.043566   18124 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:34745
	I0601 10:41:52.043927   18124 main.go:134] libmachine: () Calling .GetVersion
	I0601 10:41:52.044315   18124 main.go:134] libmachine: Using API Version  1
	I0601 10:41:52.044336   18124 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 10:41:52.044601   18124 main.go:134] libmachine: () Calling .GetMachineName
	I0601 10:41:52.044770   18124 main.go:134] libmachine: (multinode-20220601103809-7337) Calling .DriverName
	I0601 10:41:52.044985   18124 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 10:41:52.045007   18124 main.go:134] libmachine: (multinode-20220601103809-7337) Calling .GetSSHHostname
	I0601 10:41:52.047375   18124 main.go:134] libmachine: (multinode-20220601103809-7337) DBG | domain multinode-20220601103809-7337 has defined MAC address 52:54:00:01:e4:cd in network mk-multinode-20220601103809-7337
	I0601 10:41:52.047788   18124 main.go:134] libmachine: (multinode-20220601103809-7337) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:e4:cd", ip: ""} in network mk-multinode-20220601103809-7337: {Iface:virbr5 ExpiryTime:2022-06-01 11:38:22 +0000 UTC Type:0 Mac:52:54:00:01:e4:cd Iaid: IPaddr:192.168.50.245 Prefix:24 Hostname:multinode-20220601103809-7337 Clientid:01:52:54:00:01:e4:cd}
	I0601 10:41:52.047820   18124 main.go:134] libmachine: (multinode-20220601103809-7337) DBG | domain multinode-20220601103809-7337 has defined IP address 192.168.50.245 and MAC address 52:54:00:01:e4:cd in network mk-multinode-20220601103809-7337
	I0601 10:41:52.047924   18124 main.go:134] libmachine: (multinode-20220601103809-7337) Calling .GetSSHPort
	I0601 10:41:52.048087   18124 main.go:134] libmachine: (multinode-20220601103809-7337) Calling .GetSSHKeyPath
	I0601 10:41:52.048259   18124 main.go:134] libmachine: (multinode-20220601103809-7337) Calling .GetSSHUsername
	I0601 10:41:52.048387   18124 sshutil.go:53] new ssh client: &{IP:192.168.50.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/multinode-20220601103809-7337/id_rsa Username:docker}
	I0601 10:41:52.138836   18124 ssh_runner.go:195] Run: systemctl --version
	I0601 10:41:52.144160   18124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 10:41:52.157006   18124 kubeconfig.go:92] found "multinode-20220601103809-7337" server: "https://192.168.50.245:8443"
	I0601 10:41:52.157036   18124 api_server.go:165] Checking apiserver status ...
	I0601 10:41:52.157078   18124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0601 10:41:52.168703   18124 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2677/cgroup
	I0601 10:41:52.182547   18124 api_server.go:181] apiserver freezer: "9:freezer:/kubepods/burstable/pod62ea1ec1f2254e3e5f8a4f132c4bc66a/ec08f23ae78a5267156161dfec325c9ee8f2ab1b61e12c9b45d23efe4270f1c9"
	I0601 10:41:52.182595   18124 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod62ea1ec1f2254e3e5f8a4f132c4bc66a/ec08f23ae78a5267156161dfec325c9ee8f2ab1b61e12c9b45d23efe4270f1c9/freezer.state
	I0601 10:41:52.202203   18124 api_server.go:203] freezer state: "THAWED"
	I0601 10:41:52.202225   18124 api_server.go:240] Checking apiserver healthz at https://192.168.50.245:8443/healthz ...
	I0601 10:41:52.208097   18124 api_server.go:266] https://192.168.50.245:8443/healthz returned 200:
	ok
	I0601 10:41:52.208114   18124 status.go:419] multinode-20220601103809-7337 apiserver status = Running (err=<nil>)
	I0601 10:41:52.208136   18124 status.go:255] multinode-20220601103809-7337 status: &{Name:multinode-20220601103809-7337 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0601 10:41:52.208164   18124 status.go:253] checking status of multinode-20220601103809-7337-m02 ...
	I0601 10:41:52.208539   18124 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0601 10:41:52.208602   18124 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0601 10:41:52.223283   18124 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:39837
	I0601 10:41:52.223727   18124 main.go:134] libmachine: () Calling .GetVersion
	I0601 10:41:52.224183   18124 main.go:134] libmachine: Using API Version  1
	I0601 10:41:52.224214   18124 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 10:41:52.224564   18124 main.go:134] libmachine: () Calling .GetMachineName
	I0601 10:41:52.224760   18124 main.go:134] libmachine: (multinode-20220601103809-7337-m02) Calling .GetState
	I0601 10:41:52.226269   18124 status.go:328] multinode-20220601103809-7337-m02 host status = "Running" (err=<nil>)
	I0601 10:41:52.226286   18124 host.go:66] Checking if "multinode-20220601103809-7337-m02" exists ...
	I0601 10:41:52.226577   18124 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0601 10:41:52.226619   18124 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0601 10:41:52.240739   18124 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:44221
	I0601 10:41:52.241082   18124 main.go:134] libmachine: () Calling .GetVersion
	I0601 10:41:52.241478   18124 main.go:134] libmachine: Using API Version  1
	I0601 10:41:52.241497   18124 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 10:41:52.241750   18124 main.go:134] libmachine: () Calling .GetMachineName
	I0601 10:41:52.241937   18124 main.go:134] libmachine: (multinode-20220601103809-7337-m02) Calling .GetIP
	I0601 10:41:52.244227   18124 main.go:134] libmachine: (multinode-20220601103809-7337-m02) DBG | domain multinode-20220601103809-7337-m02 has defined MAC address 52:54:00:98:0a:4b in network mk-multinode-20220601103809-7337
	I0601 10:41:52.244591   18124 main.go:134] libmachine: (multinode-20220601103809-7337-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:0a:4b", ip: ""} in network mk-multinode-20220601103809-7337: {Iface:virbr5 ExpiryTime:2022-06-01 11:39:50 +0000 UTC Type:0 Mac:52:54:00:98:0a:4b Iaid: IPaddr:192.168.50.20 Prefix:24 Hostname:multinode-20220601103809-7337-m02 Clientid:01:52:54:00:98:0a:4b}
	I0601 10:41:52.244642   18124 main.go:134] libmachine: (multinode-20220601103809-7337-m02) DBG | domain multinode-20220601103809-7337-m02 has defined IP address 192.168.50.20 and MAC address 52:54:00:98:0a:4b in network mk-multinode-20220601103809-7337
	I0601 10:41:52.244729   18124 host.go:66] Checking if "multinode-20220601103809-7337-m02" exists ...
	I0601 10:41:52.245019   18124 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0601 10:41:52.245052   18124 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0601 10:41:52.258765   18124 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:43041
	I0601 10:41:52.259107   18124 main.go:134] libmachine: () Calling .GetVersion
	I0601 10:41:52.259559   18124 main.go:134] libmachine: Using API Version  1
	I0601 10:41:52.259580   18124 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 10:41:52.259863   18124 main.go:134] libmachine: () Calling .GetMachineName
	I0601 10:41:52.260036   18124 main.go:134] libmachine: (multinode-20220601103809-7337-m02) Calling .DriverName
	I0601 10:41:52.260209   18124 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0601 10:41:52.260232   18124 main.go:134] libmachine: (multinode-20220601103809-7337-m02) Calling .GetSSHHostname
	I0601 10:41:52.262720   18124 main.go:134] libmachine: (multinode-20220601103809-7337-m02) DBG | domain multinode-20220601103809-7337-m02 has defined MAC address 52:54:00:98:0a:4b in network mk-multinode-20220601103809-7337
	I0601 10:41:52.263100   18124 main.go:134] libmachine: (multinode-20220601103809-7337-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:0a:4b", ip: ""} in network mk-multinode-20220601103809-7337: {Iface:virbr5 ExpiryTime:2022-06-01 11:39:50 +0000 UTC Type:0 Mac:52:54:00:98:0a:4b Iaid: IPaddr:192.168.50.20 Prefix:24 Hostname:multinode-20220601103809-7337-m02 Clientid:01:52:54:00:98:0a:4b}
	I0601 10:41:52.263143   18124 main.go:134] libmachine: (multinode-20220601103809-7337-m02) DBG | domain multinode-20220601103809-7337-m02 has defined IP address 192.168.50.20 and MAC address 52:54:00:98:0a:4b in network mk-multinode-20220601103809-7337
	I0601 10:41:52.263246   18124 main.go:134] libmachine: (multinode-20220601103809-7337-m02) Calling .GetSSHPort
	I0601 10:41:52.263423   18124 main.go:134] libmachine: (multinode-20220601103809-7337-m02) Calling .GetSSHKeyPath
	I0601 10:41:52.263583   18124 main.go:134] libmachine: (multinode-20220601103809-7337-m02) Calling .GetSSHUsername
	I0601 10:41:52.263764   18124 sshutil.go:53] new ssh client: &{IP:192.168.50.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/machines/multinode-20220601103809-7337-m02/id_rsa Username:docker}
	I0601 10:41:52.346322   18124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0601 10:41:52.357808   18124 status.go:255] multinode-20220601103809-7337-m02 status: &{Name:multinode-20220601103809-7337-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0601 10:41:52.357836   18124 status.go:253] checking status of multinode-20220601103809-7337-m03 ...
	I0601 10:41:52.358282   18124 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0601 10:41:52.358329   18124 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0601 10:41:52.373593   18124 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:36801
	I0601 10:41:52.374056   18124 main.go:134] libmachine: () Calling .GetVersion
	I0601 10:41:52.374494   18124 main.go:134] libmachine: Using API Version  1
	I0601 10:41:52.374515   18124 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 10:41:52.374798   18124 main.go:134] libmachine: () Calling .GetMachineName
	I0601 10:41:52.374969   18124 main.go:134] libmachine: (multinode-20220601103809-7337-m03) Calling .GetState
	I0601 10:41:52.376567   18124 status.go:328] multinode-20220601103809-7337-m03 host status = "Stopped" (err=<nil>)
	I0601 10:41:52.376582   18124 status.go:341] host is not running, skipping remaining checks
	I0601 10:41:52.376587   18124 status.go:255] multinode-20220601103809-7337-m03 status: &{Name:multinode-20220601103809-7337-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)

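Note the exit codes here: with one machine down, status exits 7 instead of 0, so a script can detect a partially stopped cluster without parsing the table. A sketch, profile name again a placeholder:

    minikube -p multi-demo node stop m03    # stop only the third machine
    minikube -p multi-demo status
    echo $?                                 # 7 once any host reports Stopped
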
TestMultiNode/serial/StartAfterStop (48.24s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 node start m03 --alsologtostderr
E0601 10:42:00.915200    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
E0601 10:42:28.600131    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220601103809-7337 node start m03 --alsologtostderr: (47.618852606s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (48.24s)

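Restarting the machine is the mirror operation; the test then checks that minikube and the API server agree on membership. Sketch:

    minikube -p multi-demo node start m03 --alsologtostderr
    minikube -p multi-demo status           # back to exit 0 once all hosts run
    kubectl get nodes                       # the worker should rejoin the list
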
TestMultiNode/serial/RestartKeepsNodes (513.95s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220601103809-7337
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20220601103809-7337
E0601 10:42:46.264913    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
E0601 10:44:09.312036    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
E0601 10:44:31.580471    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20220601103809-7337: (3m5.219220486s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220601103809-7337 --wait=true -v=8 --alsologtostderr
E0601 10:47:00.914249    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
E0601 10:47:46.264651    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
E0601 10:49:31.580355    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
E0601 10:50:54.623158    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220601103809-7337 --wait=true -v=8 --alsologtostderr: (5m28.604014652s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220601103809-7337
--- PASS: TestMultiNode/serial/RestartKeepsNodes (513.95s)

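The point of this test is that a full stop/start cycle preserves topology: node list prints the same machines before and after the restart. Sketch:

    minikube node list -p multi-demo            # record the node set
    minikube stop -p multi-demo                 # stops every machine in the profile
    minikube start -p multi-demo --wait=true    # restart, waiting for all components
    minikube node list -p multi-demo            # expect the same node set
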
TestMultiNode/serial/DeleteNode (2.17s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220601103809-7337 node delete m03: (1.655388739s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 status --alsologtostderr
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.17s)

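Deletion is verified on both sides: minikube's own status and the Kubernetes view, the latter via a go-template that prints each node's Ready condition. Sketch (quoting adjusted for an interactive shell):

    minikube -p multi-demo node delete m03
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
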
TestMultiNode/serial/StopMultiNode (184.17s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 stop
E0601 10:52:00.914538    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
E0601 10:52:46.265046    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
E0601 10:53:23.962730    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220601103809-7337 stop: (3m3.977203934s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220601103809-7337 status: exit status 7 (95.2155ms)

-- stdout --
	multinode-20220601103809-7337
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220601103809-7337-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220601103809-7337 status --alsologtostderr: exit status 7 (94.594998ms)

-- stdout --
	multinode-20220601103809-7337
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220601103809-7337-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
** stderr ** 
	I0601 10:54:20.871324   19416 out.go:296] Setting OutFile to fd 1 ...
	I0601 10:54:20.871523   19416 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:54:20.871536   19416 out.go:309] Setting ErrFile to fd 2...
	I0601 10:54:20.871543   19416 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 10:54:20.871643   19416 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 10:54:20.871807   19416 out.go:303] Setting JSON to false
	I0601 10:54:20.871828   19416 mustload.go:65] Loading cluster: multinode-20220601103809-7337
	I0601 10:54:20.872180   19416 config.go:178] Loaded profile config "multinode-20220601103809-7337": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 10:54:20.872197   19416 status.go:253] checking status of multinode-20220601103809-7337 ...
	I0601 10:54:20.872535   19416 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0601 10:54:20.872588   19416 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0601 10:54:20.886539   19416 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:38769
	I0601 10:54:20.886996   19416 main.go:134] libmachine: () Calling .GetVersion
	I0601 10:54:20.887548   19416 main.go:134] libmachine: Using API Version  1
	I0601 10:54:20.887572   19416 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 10:54:20.887878   19416 main.go:134] libmachine: () Calling .GetMachineName
	I0601 10:54:20.888101   19416 main.go:134] libmachine: (multinode-20220601103809-7337) Calling .GetState
	I0601 10:54:20.889566   19416 status.go:328] multinode-20220601103809-7337 host status = "Stopped" (err=<nil>)
	I0601 10:54:20.889578   19416 status.go:341] host is not running, skipping remaining checks
	I0601 10:54:20.889586   19416 status.go:255] multinode-20220601103809-7337 status: &{Name:multinode-20220601103809-7337 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0601 10:54:20.889624   19416 status.go:253] checking status of multinode-20220601103809-7337-m02 ...
	I0601 10:54:20.889903   19416 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0601 10:54:20.889946   19416 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0601 10:54:20.903760   19416 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:44409
	I0601 10:54:20.904159   19416 main.go:134] libmachine: () Calling .GetVersion
	I0601 10:54:20.904604   19416 main.go:134] libmachine: Using API Version  1
	I0601 10:54:20.904624   19416 main.go:134] libmachine: () Calling .SetConfigRaw
	I0601 10:54:20.904890   19416 main.go:134] libmachine: () Calling .GetMachineName
	I0601 10:54:20.905161   19416 main.go:134] libmachine: (multinode-20220601103809-7337-m02) Calling .GetState
	I0601 10:54:20.906591   19416 status.go:328] multinode-20220601103809-7337-m02 host status = "Stopped" (err=<nil>)
	I0601 10:54:20.906676   19416 status.go:341] host is not running, skipping remaining checks
	I0601 10:54:20.906685   19416 status.go:255] multinode-20220601103809-7337-m02 status: &{Name:multinode-20220601103809-7337-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (184.17s)

TestMultiNode/serial/RestartMultiNode (233.17s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220601103809-7337 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0601 10:54:31.580388    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
E0601 10:57:00.914251    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
E0601 10:57:46.264038    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220601103809-7337 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m52.651620636s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220601103809-7337 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (233.17s)

TestMultiNode/serial/ValidateNameConflict (62.83s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220601103809-7337
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220601103809-7337-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220601103809-7337-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (80.478706ms)

-- stdout --
	* [multinode-20220601103809-7337-m02] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220601103809-7337-m02' is duplicated with machine name 'multinode-20220601103809-7337-m02' in profile 'multinode-20220601103809-7337'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220601103809-7337-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220601103809-7337-m03 --driver=kvm2  --container-runtime=containerd: (1m1.199217139s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220601103809-7337
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20220601103809-7337: exit status 80 (237.415659ms)

-- stdout --
	* Adding node m03 to cluster multinode-20220601103809-7337
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220601103809-7337-m03 already exists in multinode-20220601103809-7337-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20220601103809-7337-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20220601103809-7337-m03: (1.251003455s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (62.83s)

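Profile and machine names share one namespace: the first worker of profile X is the machine X-m02, so creating a profile named X-m02 is refused with MK_USAGE (exit 14), and node add likewise refuses a name an existing profile already owns (exit 80, GUEST_NODE_ADD). Sketch with placeholder names:

    minikube start -p demo --driver=kvm2 --container-runtime=containerd
    minikube node add -p demo       # creates the machine demo-m02
    minikube start -p demo-m02      # exit 14: duplicate profile/machine name
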
TestPreload (174.82s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220601105919-7337 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.17.0
E0601 10:59:31.581300    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
E0601 11:00:49.313492    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220601105919-7337 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.17.0: (2m8.036363151s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220601105919-7337 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:61: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20220601105919-7337 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (1.855409107s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220601105919-7337 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.17.3
E0601 11:02:00.914035    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220601105919-7337 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.17.3: (43.275366556s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220601105919-7337 -- sudo crictl image ls
helpers_test.go:175: Cleaning up "test-preload-20220601105919-7337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20220601105919-7337
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20220601105919-7337: (1.413588172s)
--- PASS: TestPreload (174.82s)

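TestPreload checks that images pulled into the container runtime survive a Kubernetes version bump even with preloaded tarballs disabled. The same check by hand, profile name a placeholder:

    minikube start -p preload-demo --preload=false --kubernetes-version=v1.17.0 --driver=kvm2 --container-runtime=containerd
    minikube ssh -p preload-demo -- sudo crictl pull gcr.io/k8s-minikube/busybox
    minikube start -p preload-demo --kubernetes-version=v1.17.3 --driver=kvm2 --container-runtime=containerd
    minikube ssh -p preload-demo -- sudo crictl image ls    # busybox should still be listed
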
TestScheduledStopUnix (132.45s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20220601110214-7337 --memory=2048 --driver=kvm2  --container-runtime=containerd
E0601 11:02:46.265688    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20220601110214-7337 --memory=2048 --driver=kvm2  --container-runtime=containerd: (1m0.398262996s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220601110214-7337 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220601110214-7337 -n scheduled-stop-20220601110214-7337
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220601110214-7337 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220601110214-7337 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220601110214-7337 -n scheduled-stop-20220601110214-7337
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220601110214-7337
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220601110214-7337 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220601110214-7337
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20220601110214-7337: exit status 7 (76.444981ms)

-- stdout --
	scheduled-stop-20220601110214-7337
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220601110214-7337 -n scheduled-stop-20220601110214-7337
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220601110214-7337 -n scheduled-stop-20220601110214-7337: exit status 7 (78.481644ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220601110214-7337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20220601110214-7337
--- PASS: TestScheduledStopUnix (132.45s)

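A scheduled stop arms a timer rather than stopping immediately; re-arming replaces the pending timer, it can be cancelled, and the remaining time is exposed through status. Sketch:

    minikube stop -p sched-demo --schedule 5m                  # stop five minutes from now
    minikube status -p sched-demo --format='{{.TimeToStop}}'   # inspect the pending timer
    minikube stop -p sched-demo --schedule 15s                 # re-arm with a new deadline
    minikube stop -p sched-demo --cancel-scheduled             # or call it off
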
TestRunningBinaryUpgrade (160.83s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.16.0.2318647764.exe start -p running-upgrade-20220601110426-7337 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.16.0.2318647764.exe start -p running-upgrade-20220601110426-7337 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m57.510309149s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20220601110426-7337 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0601 11:07:00.914395    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20220601110426-7337 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (41.354305313s)
helpers_test.go:175: Cleaning up "running-upgrade-20220601110426-7337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20220601110426-7337
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20220601110426-7337: (1.487631189s)
--- PASS: TestRunningBinaryUpgrade (160.83s)

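Here the old v1.16.0 binary creates the cluster and the freshly built binary adopts it while it is still running, with no stop in between. Sketch, assuming the old release binary is available locally (the path is a placeholder):

    /tmp/minikube-v1.16.0 start -p upgrade-demo --memory=2200 --vm-driver=kvm2 --container-runtime=containerd
    minikube start -p upgrade-demo --memory=2200 --driver=kvm2 --container-runtime=containerd    # new binary takes over the live profile
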
TestKubernetesUpgrade (245.04s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220601110426-7337 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220601110426-7337 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m37.175383329s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220601110426-7337
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220601110426-7337: (2.099043425s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220601110426-7337 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20220601110426-7337 status --format={{.Host}}: exit status 7 (78.095534ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220601110426-7337 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220601110426-7337 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (2m1.053664457s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220601110426-7337 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220601110426-7337 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220601110426-7337 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (102.543605ms)

-- stdout --
	* [kubernetes-upgrade-20220601110426-7337] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.6 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220601110426-7337
	    minikube start -p kubernetes-upgrade-20220601110426-7337 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220601110426-73372 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.6, by running:
	    
	    minikube start -p kubernetes-upgrade-20220601110426-7337 --kubernetes-version=v1.23.6
** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220601110426-7337 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220601110426-7337 --memory=2200 --kubernetes-version=v1.23.6 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (23.043484892s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220601110426-7337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220601110426-7337
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220601110426-7337: (1.437147536s)
--- PASS: TestKubernetesUpgrade (245.04s)

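The upgrade path is stop, then start with a newer --kubernetes-version; asking the same profile for an older version is refused up front with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) and the recovery options printed above. Sketch, profile name a placeholder:

    minikube start -p k8s-demo --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=containerd
    minikube stop -p k8s-demo
    minikube start -p k8s-demo --kubernetes-version=v1.23.6 --driver=kvm2 --container-runtime=containerd    # upgrade succeeds
    minikube start -p k8s-demo --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=containerd    # exit 106: downgrade refused
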
TestStoppedBinaryUpgrade/Setup (0.58s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.58s)

TestPause/serial/Start (94.43s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220601110620-7337 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd

=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-20220601110620-7337 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m34.430040582s)
--- PASS: TestPause/serial/Start (94.43s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220601110707-7337 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-20220601110707-7337 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (89.793513ms)

-- stdout --
	* [NoKubernetes-20220601110707-7337] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

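--no-kubernetes and --kubernetes-version are mutually exclusive, and the conflict is caught before any VM work starts (exit 14 in under 100ms above). Sketch:

    minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=1.20 --driver=kvm2    # rejected immediately
    minikube start -p nok8s-demo --no-kubernetes --driver=kvm2                              # runtime-only, no cluster
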
TestNoKubernetes/serial/StartWithK8s (66.77s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220601110707-7337 --driver=kvm2  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220601110707-7337 --driver=kvm2  --container-runtime=containerd: (1m6.445912053s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220601110707-7337 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (66.77s)

TestNoKubernetes/serial/StartWithStopK8s (10.74s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220601110707-7337 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220601110707-7337 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (9.190710555s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220601110707-7337 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-20220601110707-7337 status -o json: exit status 2 (238.008573ms)

-- stdout --
	{"Name":"NoKubernetes-20220601110707-7337","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-20220601110707-7337
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-20220601110707-7337: (1.313116065s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.74s)

TestNoKubernetes/serial/Start (26.59s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220601110707-7337 --no-kubernetes --driver=kvm2  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220601110707-7337 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (26.588694754s)
--- PASS: TestNoKubernetes/serial/Start (26.59s)

TestNetworkPlugins/group/false (0.37s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:220: (dbg) Run:  out/minikube-linux-amd64 start -p false-20220601110831-7337 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:220: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20220601110831-7337 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (129.632354ms)

-- stdout --
	* [false-20220601110831-7337] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=14079
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0601 11:08:31.958466   24244 out.go:296] Setting OutFile to fd 1 ...
	I0601 11:08:31.958668   24244 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:08:31.958681   24244 out.go:309] Setting ErrFile to fd 2...
	I0601 11:08:31.958687   24244 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0601 11:08:31.958838   24244 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/bin
	I0601 11:08:31.959203   24244 out.go:303] Setting JSON to false
	I0601 11:08:31.960407   24244 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3066,"bootTime":1654078646,"procs":273,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1027-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0601 11:08:31.960484   24244 start.go:125] virtualization: kvm guest
	I0601 11:08:31.963463   24244 out.go:177] * [false-20220601110831-7337] minikube v1.26.0-beta.1 on Ubuntu 20.04 (kvm/amd64)
	I0601 11:08:31.965309   24244 out.go:177]   - MINIKUBE_LOCATION=14079
	I0601 11:08:31.965282   24244 notify.go:193] Checking for updates...
	I0601 11:08:31.966865   24244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0601 11:08:31.968377   24244 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/kubeconfig
	I0601 11:08:31.969848   24244 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube
	I0601 11:08:31.971298   24244 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0601 11:08:31.973181   24244 config.go:178] Loaded profile config "NoKubernetes-20220601110707-7337": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I0601 11:08:31.973327   24244 config.go:178] Loaded profile config "pause-20220601110620-7337": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.23.6
	I0601 11:08:31.973443   24244 config.go:178] Loaded profile config "stopped-upgrade-20220601110426-7337": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0601 11:08:31.973502   24244 driver.go:358] Setting default libvirt URI to qemu:///system
	I0601 11:08:32.015007   24244 out.go:177] * Using the kvm2 driver based on user configuration
	I0601 11:08:32.016699   24244 start.go:284] selected driver: kvm2
	I0601 11:08:32.016712   24244 start.go:806] validating driver "kvm2" against <nil>
	I0601 11:08:32.016726   24244 start.go:817] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0601 11:08:32.018660   24244 out.go:177] 
	W0601 11:08:32.019825   24244 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0601 11:08:32.021158   24244 out.go:177] 

** /stderr **
helpers_test.go:175: Cleaning up "false-20220601110831-7337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20220601110831-7337
--- PASS: TestNetworkPlugins/group/false (0.37s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220601110707-7337 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220601110707-7337 "sudo systemctl is-active --quiet service kubelet": exit status 1 (213.536973ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

TestNoKubernetes/serial/ProfileList (0.69s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.69s)

TestNoKubernetes/serial/Stop (1.23s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-20220601110707-7337
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-20220601110707-7337: (1.227166694s)
--- PASS: TestNoKubernetes/serial/Stop (1.23s)

TestNoKubernetes/serial/StartNoArgs (66.96s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220601110707-7337 --driver=kvm2  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220601110707-7337 --driver=kvm2  --container-runtime=containerd: (1m6.963660168s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (66.96s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.58s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20220601110426-7337
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.58s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220601110707-7337 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220601110707-7337 "sudo systemctl is-active --quiet service kubelet": exit status 1 (211.373817ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

TestStartStop/group/old-k8s-version/serial/FirstStart (183.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220601111001-7337 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
E0601 11:10:03.963659    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220601111001-7337 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (3m3.506072936s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (183.51s)

TestStartStop/group/embed-certs/serial/FirstStart (171.68s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220601111017-7337 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220601111017-7337 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.23.6: (2m51.684361241s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (171.68s)

TestStartStop/group/no-preload/serial/FirstStart (127.79s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220601111130-7337 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.23.6
E0601 11:12:00.914823    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
E0601 11:12:46.264148    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220601111130-7337 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.23.6: (2m7.786462316s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (127.79s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220601111001-7337 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [2f5bb1b2-4ac0-4aeb-87f8-355aee167b0d] Pending
helpers_test.go:342: "busybox" [2f5bb1b2-4ac0-4aeb-87f8-355aee167b0d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])

=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
helpers_test.go:342: "busybox" [2f5bb1b2-4ac0-4aeb-87f8-355aee167b0d] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:198: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.031659393s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context old-k8s-version-20220601111001-7337 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.54s)

TestStartStop/group/embed-certs/serial/DeployApp (8.48s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220601111017-7337 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [88801531-4323-46a1-93eb-42b8a2123459] Pending

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:342: "busybox" [88801531-4323-46a1-93eb-42b8a2123459] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [88801531-4323-46a1-93eb-42b8a2123459] Running

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:198: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.014648575s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context embed-certs-20220601111017-7337 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.48s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20220601111001-7337 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context old-k8s-version-20220601111001-7337 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.67s)

TestStartStop/group/old-k8s-version/serial/Stop (92.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20220601111001-7337 --alsologtostderr -v=3

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20220601111001-7337 --alsologtostderr -v=3: (1m32.372578106s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.37s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20220601111017-7337 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context embed-certs-20220601111017-7337 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/embed-certs/serial/Stop (91.94s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20220601111017-7337 --alsologtostderr -v=3

=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20220601111017-7337 --alsologtostderr -v=3: (1m31.943143292s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.94s)

TestStartStop/group/no-preload/serial/DeployApp (10.45s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220601111130-7337 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [2271d453-f014-4ba4-9e1c-cd20615ed9bd] Pending
helpers_test.go:342: "busybox" [2271d453-f014-4ba4-9e1c-cd20615ed9bd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [2271d453-f014-4ba4-9e1c-cd20615ed9bd] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.027610626s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context no-preload-20220601111130-7337 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.45s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.69s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20220601111130-7337 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context no-preload-20220601111130-7337 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.69s)

TestStartStop/group/no-preload/serial/Stop (92.47s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20220601111130-7337 --alsologtostderr -v=3

=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20220601111130-7337 --alsologtostderr -v=3: (1m32.471822624s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.47s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (120.36s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220601111418-7337 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.23.6
E0601 11:14:31.580787    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220601111418-7337 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.23.6: (2m0.363102231s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (120.36s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220601111001-7337 -n old-k8s-version-20220601111001-7337
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220601111001-7337 -n old-k8s-version-20220601111001-7337: exit status 7 (80.096898ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20220601111001-7337 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (476.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220601111001-7337 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20220601111001-7337 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (7m56.150700132s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220601111001-7337 -n old-k8s-version-20220601111001-7337
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (476.41s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220601111017-7337 -n embed-certs-20220601111017-7337
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220601111017-7337 -n embed-certs-20220601111017-7337: exit status 7 (110.7923ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20220601111017-7337 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/embed-certs/serial/SecondStart (430.00s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220601111017-7337 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20220601111017-7337 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.23.6: (7m9.696244484s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220601111017-7337 -n embed-certs-20220601111017-7337
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (430.00s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220601111130-7337 -n no-preload-20220601111130-7337
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220601111130-7337 -n no-preload-20220601111130-7337: exit status 7 (100.405834ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20220601111130-7337 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (339.73s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220601111130-7337 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220601111130-7337 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.23.6: (5m39.445087432s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220601111130-7337 -n no-preload-20220601111130-7337
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (339.73s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (9.50s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220601111418-7337 create -f testdata/busybox.yaml
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [2a59e3e5-5bf6-4137-8a0d-91af0c69adbe] Pending
helpers_test.go:342: "busybox" [2a59e3e5-5bf6-4137-8a0d-91af0c69adbe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [2a59e3e5-5bf6-4137-8a0d-91af0c69adbe] Running
start_stop_delete_test.go:198: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 9.017280445s
start_stop_delete_test.go:198: (dbg) Run:  kubectl --context default-k8s-different-port-20220601111418-7337 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (9.50s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.71s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20220601111418-7337 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:217: (dbg) Run:  kubectl --context default-k8s-different-port-20220601111418-7337 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.71s)

TestStartStop/group/default-k8s-different-port/serial/Stop (92.38s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20220601111418-7337 --alsologtostderr -v=3
E0601 11:17:00.914504    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
E0601 11:17:29.313861    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
E0601 11:17:46.264849    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20220601111418-7337 --alsologtostderr -v=3: (1m32.37524189s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (92.38s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220601111418-7337 -n default-k8s-different-port-20220601111418-7337
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220601111418-7337 -n default-k8s-different-port-20220601111418-7337: exit status 7 (86.636839ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20220601111418-7337 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (403.05s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220601111418-7337 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.23.6
E0601 11:19:31.580616    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20220601111418-7337 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.23.6: (6m42.693935491s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220601111418-7337 -n default-k8s-different-port-20220601111418-7337
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (403.05s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-hp94z" [3923ee58-4507-406e-b75b-1a735efc3aff] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-8469778f77-hp94z" [3923ee58-4507-406e-b75b-1a735efc3aff] Running
start_stop_delete_test.go:276: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.016799231s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (9.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-hp94z" [3923ee58-4507-406e-b75b-1a735efc3aff] Running
start_stop_delete_test.go:289: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01081921s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context no-preload-20220601111130-7337 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20220601111130-7337 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (2.48s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20220601111130-7337 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220601111130-7337 -n no-preload-20220601111130-7337
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220601111130-7337 -n no-preload-20220601111130-7337: exit status 2 (258.247372ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220601111130-7337 -n no-preload-20220601111130-7337
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220601111130-7337 -n no-preload-20220601111130-7337: exit status 2 (259.062144ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20220601111130-7337 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220601111130-7337 -n no-preload-20220601111130-7337
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220601111130-7337 -n no-preload-20220601111130-7337
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.48s)

TestStartStop/group/newest-cni/serial/FirstStart (73.61s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220601112120-7337 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:188: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220601112120-7337 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.23.6: (1m13.61170468s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (73.61s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (15.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-pddpj" [ada42283-9064-43e1-8d70-901e5959b1b7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0601 11:22:00.913727    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
helpers_test.go:342: "kubernetes-dashboard-8469778f77-pddpj" [ada42283-9064-43e1-8d70-901e5959b1b7] Running
start_stop_delete_test.go:276: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 15.015445662s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (15.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-pddpj" [ada42283-9064-43e1-8d70-901e5959b1b7] Running
start_stop_delete_test.go:289: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009166237s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context embed-certs-20220601111017-7337 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20220601111017-7337 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (2.37s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20220601111017-7337 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220601111017-7337 -n embed-certs-20220601111017-7337
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220601111017-7337 -n embed-certs-20220601111017-7337: exit status 2 (249.357553ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220601111017-7337 -n embed-certs-20220601111017-7337
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220601111017-7337 -n embed-certs-20220601111017-7337: exit status 2 (256.613821ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20220601111017-7337 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220601111017-7337 -n embed-certs-20220601111017-7337
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20220601111017-7337 -n embed-certs-20220601111017-7337
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.37s)

TestNetworkPlugins/group/auto/Start (115.26s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20220601110831-7337 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=kvm2  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p auto-20220601110831-7337 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=kvm2  --container-runtime=containerd: (1m55.262742051s)
--- PASS: TestNetworkPlugins/group/auto/Start (115.26s)

TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.78s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:207: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20220601112120-7337 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/newest-cni/serial/Stop (4.13s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20220601112120-7337 --alsologtostderr -v=3
start_stop_delete_test.go:230: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20220601112120-7337 --alsologtostderr -v=3: (4.133922033s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (4.13s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:241: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220601112120-7337 -n newest-cni-20220601112120-7337
start_stop_delete_test.go:241: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220601112120-7337 -n newest-cni-20220601112120-7337: exit status 7 (82.611509ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:241: status error: exit status 7 (may be ok)
start_stop_delete_test.go:248: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20220601112120-7337 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
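
Note: minikube's status exit code appears to be a bitmask over host, cluster, and Kubernetes state (1 + 2 + 4), so exit status 7 here simply reflects a fully stopped profile, which is expected right after Stop; that is why the harness logs it as "(may be ok)". A minimal by-hand check, assuming the profile still exists:

	out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220601112120-7337; echo "exit: $?"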

TestStartStop/group/newest-cni/serial/SecondStart (81.37s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220601112120-7337 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.23.6

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:258: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220601112120-7337 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.23.6: (1m21.090675966s)
start_stop_delete_test.go:264: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220601112120-7337 -n newest-cni-20220601112120-7337
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (81.37s)
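
Note: each --extra-config=component.key=value flag above forwards one setting to a single Kubernetes component at start time (here the kubelet's network plugin and kubeadm's pod network CIDR), while --wait=apiserver,system_pods,default_sa restricts readiness checks to components that can come up before a CNI is configured. A trimmed-down sketch overriding only the CIDR (value illustrative, not from this run):

	out/minikube-linux-amd64 start -p newest-cni-20220601112120-7337 --extra-config=kubeadm.pod-network-cidr=10.244.0.0/16 --driver=kvm2 --container-runtime=containerd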

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-6fb5469cf5-spq9n" [1535d278-acd9-40d4-8e77-9e4aba7c9c70] Running
E0601 11:22:46.264655    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
start_stop_delete_test.go:276: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016790082s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-6fb5469cf5-spq9n" [1535d278-acd9-40d4-8e77-9e4aba7c9c70] Running
start_stop_delete_test.go:289: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007935325s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context old-k8s-version-20220601111001-7337 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20220601111001-7337 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
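
Note: this check lists the node's containerd image store over SSH via crictl and reports anything outside minikube's expected image set; the kindnet and busybox entries above are expected leftovers from earlier tests, not failures. A rough manual equivalent, assuming jq is available on the host:

	out/minikube-linux-amd64 ssh -p old-k8s-version-20220601111001-7337 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'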

TestStartStop/group/old-k8s-version/serial/Pause (4.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20220601111001-7337 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601111001-7337 -n old-k8s-version-20220601111001-7337
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601111001-7337 -n old-k8s-version-20220601111001-7337: exit status 2 (311.109519ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220601111001-7337 -n old-k8s-version-20220601111001-7337
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220601111001-7337 -n old-k8s-version-20220601111001-7337: exit status 2 (321.164128ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20220601111001-7337 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Done: out/minikube-linux-amd64 unpause -p old-k8s-version-20220601111001-7337 --alsologtostderr -v=1: (1.402596254s)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601111001-7337 -n old-k8s-version-20220601111001-7337
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20220601111001-7337 -n old-k8s-version-20220601111001-7337
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.08s)
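
Note on the Pause sequence above: after pause, the API server reports "Paused" while the kubelet reports "Stopped", and status exits 2 because part of the cluster is not running, so the harness again treats the non-zero exit as expected; unpause then brings both back. The same cycle by hand, assuming the profile is still present:

	out/minikube-linux-amd64 pause -p old-k8s-version-20220601111001-7337
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220601111001-7337
	out/minikube-linux-amd64 unpause -p old-k8s-version-20220601111001-7337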

TestNetworkPlugins/group/kindnet/Start (102.58s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20220601110831-7337 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=kvm2  --container-runtime=containerd
E0601 11:23:05.854119    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601111001-7337/client.crt: no such file or directory
E0601 11:23:05.859393    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601111001-7337/client.crt: no such file or directory
E0601 11:23:05.869619    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601111001-7337/client.crt: no such file or directory
E0601 11:23:05.890060    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601111001-7337/client.crt: no such file or directory
E0601 11:23:05.930334    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601111001-7337/client.crt: no such file or directory
E0601 11:23:06.010659    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601111001-7337/client.crt: no such file or directory
E0601 11:23:06.171320    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601111001-7337/client.crt: no such file or directory
E0601 11:23:06.491692    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601111001-7337/client.crt: no such file or directory
E0601 11:23:07.132117    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601111001-7337/client.crt: no such file or directory
E0601 11:23:08.412681    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601111001-7337/client.crt: no such file or directory
E0601 11:23:10.973195    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601111001-7337/client.crt: no such file or directory
E0601 11:23:16.093725    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601111001-7337/client.crt: no such file or directory
E0601 11:23:26.334426    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601111001-7337/client.crt: no such file or directory
E0601 11:23:38.416053    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601111130-7337/client.crt: no such file or directory
E0601 11:23:38.422156    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601111130-7337/client.crt: no such file or directory
E0601 11:23:38.432397    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601111130-7337/client.crt: no such file or directory
E0601 11:23:38.452784    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601111130-7337/client.crt: no such file or directory
E0601 11:23:38.493087    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601111130-7337/client.crt: no such file or directory
E0601 11:23:38.573419    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601111130-7337/client.crt: no such file or directory
E0601 11:23:38.734098    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601111130-7337/client.crt: no such file or directory
E0601 11:23:39.054638    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601111130-7337/client.crt: no such file or directory
E0601 11:23:39.695787    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601111130-7337/client.crt: no such file or directory
E0601 11:23:40.976751    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601111130-7337/client.crt: no such file or directory
E0601 11:23:43.537042    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601111130-7337/client.crt: no such file or directory
E0601 11:23:46.815069    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601111001-7337/client.crt: no such file or directory
E0601 11:23:48.657897    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601111130-7337/client.crt: no such file or directory
E0601 11:23:58.898364    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601111130-7337/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20220601110831-7337 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m42.582287839s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (102.58s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:286: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20220601112120-7337 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (2.16s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20220601112120-7337 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220601112120-7337 -n newest-cni-20220601112120-7337
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220601112120-7337 -n newest-cni-20220601112120-7337: exit status 2 (266.511245ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220601112120-7337 -n newest-cni-20220601112120-7337
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220601112120-7337 -n newest-cni-20220601112120-7337: exit status 2 (254.019228ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20220601112120-7337 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220601112120-7337 -n newest-cni-20220601112120-7337
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220601112120-7337 -n newest-cni-20220601112120-7337
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.16s)

TestNetworkPlugins/group/cilium/Start (119.16s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20220601110832-7337 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=kvm2  --container-runtime=containerd
E0601 11:24:14.624398    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
E0601 11:24:19.379349    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601111130-7337/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20220601110832-7337 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=kvm2  --container-runtime=containerd: (1m59.156047047s)
--- PASS: TestNetworkPlugins/group/cilium/Start (119.16s)

TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20220601110831-7337 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

TestNetworkPlugins/group/auto/NetCatPod (12.4s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-20220601110831-7337 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-j5c2z" [f00d570b-fdb9-48cc-8cc6-b0e6fde5c9f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-j5c2z" [f00d570b-fdb9-48cc-8cc6-b0e6fde5c9f3] Running
E0601 11:24:27.776232    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601111001-7337/client.crt: no such file or directory
E0601 11:24:31.581127    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/functional-20220601102657-7337/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.015450574s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.40s)

TestNetworkPlugins/group/auto/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-20220601110831-7337 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.28s)

TestNetworkPlugins/group/auto/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-20220601110831-7337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

TestNetworkPlugins/group/auto/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-20220601110831-7337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)
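
Note: the three one-shot probes above cover the basics of pod networking: DNS resolves kubernetes.default through the cluster DNS, Localhost has the netcat pod dial its own port 8080 on 127.0.0.1, and HairPin has it dial its own "netcat" service name, which only succeeds when hairpin traffic back through the service VIP is supported. An equivalent probe against the service's fully qualified name (a hypothetical variant, not part of the suite):

	kubectl --context auto-20220601110831-7337 exec deployment/netcat -- nslookup netcat.default.svc.cluster.local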

TestNetworkPlugins/group/calico/Start (113.41s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20220601110832-7337 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=kvm2  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p calico-20220601110832-7337 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m53.404955508s)
--- PASS: TestNetworkPlugins/group/calico/Start (113.41s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (14.02s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-csj4d" [cf14dba3-58f4-446b-a246-dce09a301264] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:342: "kubernetes-dashboard-8469778f77-csj4d" [cf14dba3-58f4-446b-a246-dce09a301264] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:276: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.018340039s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (14.02s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-fkx9n" [05518a16-d9a2-4fae-b258-c32538203fe6] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.025336733s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20220601110831-7337 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.57s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-20220601110831-7337 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-gwbgj" [5534d53a-df0f-4edb-891f-d00ac1890e29] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:342: "netcat-668db85669-gwbgj" [5534d53a-df0f-4edb-891f-d00ac1890e29] Running

=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.017814231s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.57s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-csj4d" [cf14dba3-58f4-446b-a246-dce09a301264] Running
E0601 11:25:00.340494    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601111130-7337/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:289: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015297661s
start_stop_delete_test.go:293: (dbg) Run:  kubectl --context default-k8s-different-port-20220601111418-7337 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.12s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-20220601110831-7337 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-20220601110831-7337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-20220601110831-7337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.74s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:306: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220601111418-7337 "sudo crictl images -o json"
start_stop_delete_test.go:306: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:306: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.74s)

TestStartStop/group/default-k8s-different-port/serial/Pause (3.03s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20220601111418-7337 --alsologtostderr -v=1

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220601111418-7337 -n default-k8s-different-port-20220601111418-7337
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220601111418-7337 -n default-k8s-different-port-20220601111418-7337: exit status 2 (320.695108ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220601111418-7337 -n default-k8s-different-port-20220601111418-7337
start_stop_delete_test.go:313: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220601111418-7337 -n default-k8s-different-port-20220601111418-7337: exit status 2 (315.077842ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:313: status error: exit status 2 (may be ok)
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-different-port-20220601111418-7337 --alsologtostderr -v=1
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220601111418-7337 -n default-k8s-different-port-20220601111418-7337
start_stop_delete_test.go:313: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20220601111418-7337 -n default-k8s-different-port-20220601111418-7337
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (3.03s)

TestNetworkPlugins/group/custom-flannel/Start (89.6s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-20220601110832-7337 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/custom-flannel/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-20220601110832-7337 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m29.595095112s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (89.60s)
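
Note: --cni accepts either a built-in plugin name (as in the kindnet, calico, cilium, flannel, and bridge runs in this report) or, as here, a path to a CNI manifest that minikube applies once the node is up. A hypothetical invocation with a local manifest (profile name and path illustrative):

	out/minikube-linux-amd64 start -p custom-cni-demo --memory=2048 --cni=/path/to/my-cni.yaml --driver=kvm2 --container-runtime=containerd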

TestNetworkPlugins/group/flannel/Start (97.21s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-20220601110831-7337 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=kvm2  --container-runtime=containerd
E0601 11:25:49.697380    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/old-k8s-version-20220601111001-7337/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/flannel/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p flannel-20220601110831-7337 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m37.212637427s)
--- PASS: TestNetworkPlugins/group/flannel/Start (97.21s)

TestNetworkPlugins/group/cilium/ControllerPod (5.15s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-4pkkt" [c53b1e51-072e-4e82-acff-e2dcb59d2f5d] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.144221239s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.15s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20220601110832-7337 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.27s)

TestNetworkPlugins/group/cilium/NetCatPod (13.55s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-20220601110832-7337 replace --force -f testdata/netcat-deployment.yaml
net_test.go:138: (dbg) Done: kubectl --context cilium-20220601110832-7337 replace --force -f testdata/netcat-deployment.yaml: (1.445291458s)
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-mgq82" [61f20b52-9b61-4231-8705-233f4cba26e6] Pending
helpers_test.go:342: "netcat-668db85669-mgq82" [61f20b52-9b61-4231-8705-233f4cba26e6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-mgq82" [61f20b52-9b61-4231-8705-233f4cba26e6] Running
E0601 11:26:19.467613    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601111418-7337/client.crt: no such file or directory
E0601 11:26:19.472878    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601111418-7337/client.crt: no such file or directory
E0601 11:26:19.483103    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601111418-7337/client.crt: no such file or directory
E0601 11:26:19.503410    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601111418-7337/client.crt: no such file or directory
E0601 11:26:19.543895    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601111418-7337/client.crt: no such file or directory
E0601 11:26:19.625072    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601111418-7337/client.crt: no such file or directory
E0601 11:26:19.785544    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601111418-7337/client.crt: no such file or directory
E0601 11:26:20.106042    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601111418-7337/client.crt: no such file or directory
E0601 11:26:20.747113    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601111418-7337/client.crt: no such file or directory
E0601 11:26:22.027421    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601111418-7337/client.crt: no such file or directory
E0601 11:26:22.261231    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/no-preload-20220601111130-7337/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 12.010952508s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (13.55s)

TestNetworkPlugins/group/cilium/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-20220601110832-7337 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.26s)

TestNetworkPlugins/group/cilium/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-20220601110832-7337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.19s)

TestNetworkPlugins/group/cilium/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-20220601110832-7337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.19s)

TestNetworkPlugins/group/bridge/Start (80.71s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20220601110831-7337 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=kvm2  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20220601110831-7337 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m20.706694812s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.71s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-7zvtx" [6883ca32-b134-4738-a84c-4c24e2fbe2a2] Running
E0601 11:26:29.709226    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601111418-7337/client.crt: no such file or directory
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.019738307s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-20220601110832-7337 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

TestNetworkPlugins/group/calico/NetCatPod (13.52s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-20220601110832-7337 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-cr2wb" [07d1759d-4816-409b-9226-15351efa6778] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:342: "netcat-668db85669-cr2wb" [07d1759d-4816-409b-9226-15351efa6778] Running
E0601 11:26:43.964771    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.024592944s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.52s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-20220601110832-7337 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.5s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context custom-flannel-20220601110832-7337 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-84qnj" [530f6fb0-9822-4f4f-bfe5-921925931bf4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0601 11:26:39.949695    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601111418-7337/client.crt: no such file or directory
helpers_test.go:342: "netcat-668db85669-84qnj" [530f6fb0-9822-4f4f-bfe5-921925931bf4] Running

=== CONT  TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.011435465s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.50s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:169: (dbg) Run:  kubectl --context custom-flannel-20220601110832-7337 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:188: (dbg) Run:  kubectl --context custom-flannel-20220601110832-7337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:238: (dbg) Run:  kubectl --context custom-flannel-20220601110832-7337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/calico/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-20220601110832-7337 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

TestNetworkPlugins/group/enable-default-cni/Start (78.89s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20220601110831-7337 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20220601110831-7337 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m18.893878448s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.89s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:188: (dbg) Run:  kubectl --context calico-20220601110832-7337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:238: (dbg) Run:  kubectl --context calico-20220601110832-7337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-system" ...
helpers_test.go:342: "kube-flannel-ds-amd64-l7qr8" [e6556393-c759-4015-b620-265b9be5b313] Running

=== CONT  TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.019143435s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
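ControllerPod waits for the plugin's own pod (label app=flannel in kube-system) to be Running and healthy before the per-plugin connectivity tests proceed. Roughly the same wait by hand, using kubectl's own wait primitive (a sketch):

    # block until the flannel pod reports Ready
    kubectl --context flannel-20220601110831-7337 -n kube-system \
      wait --for=condition=Ready pod -l app=flannel --timeout=10m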

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-20220601110831-7337 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)
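pgrep -a prints each matching PID together with its full command line, which is what lets the test assert on the flags minikube passed to the kubelet. A manual spot-check might look like this (a sketch; the exact flag set varies by Kubernetes version, so the grep pattern is illustrative):

    # show kubelet's argv on the node, then pick out runtime-related flags
    out/minikube-linux-amd64 ssh -p flannel-20220601110831-7337 "pgrep -a kubelet" \
      | tr ' ' '\n' | grep container-runtime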

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (16.45s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context flannel-20220601110831-7337 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-2vd8z" [0c20027b-b13f-48e5-9158-73ab9d508747] Pending
helpers_test.go:342: "netcat-668db85669-2vd8z" [0c20027b-b13f-48e5-9158-73ab9d508747] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0601 11:27:00.430188    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/default-k8s-different-port-20220601111418-7337/client.crt: no such file or directory
E0601 11:27:00.914008    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/ingress-addon-legacy-20220601103024-7337/client.crt: no such file or directory
helpers_test.go:342: "netcat-668db85669-2vd8z" [0c20027b-b13f-48e5-9158-73ab9d508747] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 16.045313236s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (16.45s)
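replace --force deletes and recreates the netcat deployment so each plugin is probed with a fresh pod; the test then polls up to 15m for an app=netcat pod to turn Running. The transitions logged above (Pending -> ContainersNotReady -> Running) are the normal image-pull-and-start sequence. An equivalent by hand (a sketch; testdata/netcat-deployment.yaml is the manifest from the minikube test tree):

    kubectl --context flannel-20220601110831-7337 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context flannel-20220601110831-7337 rollout status deployment/netcat --timeout=15m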

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:169: (dbg) Run:  kubectl --context flannel-20220601110831-7337 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:188: (dbg) Run:  kubectl --context flannel-20220601110831-7337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:238: (dbg) Run:  kubectl --context flannel-20220601110831-7337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20220601110831-7337 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.42s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-20220601110831-7337 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-8zcv4" [4830ab26-e795-492f-91ff-c09ee01908e8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0601 11:27:46.264449    7337 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-14079-3622-798c4e8fed290cfa318a9fb994a7c6f555db39c1/.minikube/profiles/addons-20220601102016-7337/client.crt: no such file or directory
helpers_test.go:342: "netcat-668db85669-8zcv4" [4830ab26-e795-492f-91ff-c09ee01908e8] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.0081612s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.42s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-20220601110831-7337 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-20220601110831-7337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-20220601110831-7337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20220601110831-7337 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.42s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-20220601110831-7337 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-qvztt" [6db50938-1bce-4f1a-a9e5-d324dceef94d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-qvztt" [6db50938-1bce-4f1a-a9e5-d324dceef94d] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.007666878s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.42s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-20220601110831-7337 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-20220601110831-7337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-20220601110831-7337 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    

Test skip (31/287)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.6/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.6/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.6/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.6/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.6/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.6/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.6/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.6/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.6/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:214: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:448: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:455: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:542: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
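minikube tunnel adds routes on the host pointing at the cluster's service network, which needs root; on this Jenkins agent sudo prompts for a password, so the whole TunnelCmd serial chain below is skipped with the same message. Run interactively it would look something like this (a sketch; the profile name is a placeholder, not taken from this run):

    # needs root to edit the host routing table
    sudo out/minikube-linux-amd64 tunnel -p <functional-profile>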

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS

=== CONT  TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:291: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:105: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220601111130-7337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20220601111130-7337
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)

                                                
                                    
TestNetworkPlugins/group/kubenet (0.27s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:91: Skipping the test as containerd container runtimes requires CNI
helpers_test.go:175: Cleaning up "kubenet-20220601110831-7337" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20220601110831-7337
--- SKIP: TestNetworkPlugins/group/kubenet (0.27s)
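kubenet is kubelet's legacy built-in network plugin rather than a deployed CNI, and these containerd jobs require a CNI, so the suite only creates and immediately deletes the profile. The skipped combination corresponds to a start along these lines (a sketch; the --network-plugin=kubenet flag is from older minikube releases and is shown here for illustration):

    out/minikube-linux-amd64 start -p kubenet-20220601110831-7337 \
      --network-plugin=kubenet --driver=kvm2 --container-runtime=containerd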

                                                
                                    