Test Report: KVM_Linux_containerd 18277

3b3cd74538400bfa9e43257fd64a7f0f3b029a2d:2024-03-16:33601

Test failures (1/333)

Order  Failed test                                              Duration (s)
366    TestStartStop/group/old-k8s-version/serial/SecondStart   445.56
TestStartStop/group/old-k8s-version/serial/SecondStart (445.56s)
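To reproduce the failure locally, the exact invocation from the transcript below can be rerun. This assumes, as in the CI workspace, that the minikube binary has been built to out/minikube-linux-amd64 and that the docker-machine-driver-kvm2 binary is on PATH:

	out/minikube-linux-amd64 start -p old-k8s-version-985498 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.20.0

In this run the command exited with status 80 after roughly 7m23s; the full stdout/stderr transcript follows.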

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-985498 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
E0316 18:10:16.969161  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/custom-flannel-376648/client.crt: no such file or directory
E0316 18:10:27.209849  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/custom-flannel-376648/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-985498 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 80 (7m23.08748036s)

-- stdout --
	* [old-k8s-version-985498] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18277
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18277-781196/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-781196/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-985498" primary control-plane node in "old-k8s-version-985498" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-985498" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.14 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-985498 addons enable metrics-server
	
	* Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	
	

-- /stdout --
** stderr ** 
	I0316 18:10:14.143143  838136 out.go:291] Setting OutFile to fd 1 ...
	I0316 18:10:14.143493  838136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 18:10:14.143506  838136 out.go:304] Setting ErrFile to fd 2...
	I0316 18:10:14.143511  838136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 18:10:14.143744  838136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-781196/.minikube/bin
	I0316 18:10:14.144360  838136 out.go:298] Setting JSON to false
	I0316 18:10:14.145343  838136 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":85961,"bootTime":1710526653,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0316 18:10:14.145423  838136 start.go:139] virtualization: kvm guest
	I0316 18:10:14.147955  838136 out.go:177] * [old-k8s-version-985498] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0316 18:10:14.149608  838136 out.go:177]   - MINIKUBE_LOCATION=18277
	I0316 18:10:14.149671  838136 notify.go:220] Checking for updates...
	I0316 18:10:14.151140  838136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 18:10:14.152751  838136 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18277-781196/kubeconfig
	I0316 18:10:14.154243  838136 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-781196/.minikube
	I0316 18:10:14.155870  838136 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0316 18:10:14.157331  838136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 18:10:14.159117  838136 config.go:182] Loaded profile config "old-k8s-version-985498": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0316 18:10:14.159586  838136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 18:10:14.159671  838136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 18:10:14.175490  838136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41697
	I0316 18:10:14.175971  838136 main.go:141] libmachine: () Calling .GetVersion
	I0316 18:10:14.176543  838136 main.go:141] libmachine: Using API Version  1
	I0316 18:10:14.176569  838136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 18:10:14.178134  838136 main.go:141] libmachine: () Calling .GetMachineName
	I0316 18:10:14.178602  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
	I0316 18:10:14.180531  838136 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0316 18:10:14.181797  838136 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 18:10:14.182103  838136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 18:10:14.182156  838136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 18:10:14.197956  838136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40113
	I0316 18:10:14.198416  838136 main.go:141] libmachine: () Calling .GetVersion
	I0316 18:10:14.199075  838136 main.go:141] libmachine: Using API Version  1
	I0316 18:10:14.199106  838136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 18:10:14.199479  838136 main.go:141] libmachine: () Calling .GetMachineName
	I0316 18:10:14.199712  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
	I0316 18:10:14.238564  838136 out.go:177] * Using the kvm2 driver based on existing profile
	I0316 18:10:14.239974  838136 start.go:297] selected driver: kvm2
	I0316 18:10:14.240001  838136 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-985498 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-985498 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.233 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Listen
Address: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 18:10:14.240113  838136 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 18:10:14.240864  838136 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 18:10:14.240952  838136 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18277-781196/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0316 18:10:14.257576  838136 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0316 18:10:14.257978  838136 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 18:10:14.258055  838136 cni.go:84] Creating CNI manager for ""
	I0316 18:10:14.258069  838136 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0316 18:10:14.258140  838136 start.go:340] cluster config:
	{Name:old-k8s-version-985498 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-985498 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.233 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 18:10:14.258255  838136 iso.go:125] acquiring lock: {Name:mk48d016d8d435147389d59734ec7ed09e828db8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 18:10:14.261041  838136 out.go:177] * Starting "old-k8s-version-985498" primary control-plane node in "old-k8s-version-985498" cluster
	I0316 18:10:14.262777  838136 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0316 18:10:14.262860  838136 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18277-781196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0316 18:10:14.262878  838136 cache.go:56] Caching tarball of preloaded images
	I0316 18:10:14.263029  838136 preload.go:173] Found /home/jenkins/minikube-integration/18277-781196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0316 18:10:14.263065  838136 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0316 18:10:14.263201  838136 profile.go:142] Saving config to /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/old-k8s-version-985498/config.json ...
	I0316 18:10:14.263459  838136 start.go:360] acquireMachinesLock for old-k8s-version-985498: {Name:mkf97f06937f9fa972ee38e81e5f88859912f65f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0316 18:10:20.013308  838136 start.go:364] duration metric: took 5.749789254s to acquireMachinesLock for "old-k8s-version-985498"
	I0316 18:10:20.013370  838136 start.go:96] Skipping create...Using existing machine configuration
	I0316 18:10:20.013379  838136 fix.go:54] fixHost starting: 
	I0316 18:10:20.013803  838136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 18:10:20.013858  838136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 18:10:20.031278  838136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43955
	I0316 18:10:20.031799  838136 main.go:141] libmachine: () Calling .GetVersion
	I0316 18:10:20.032415  838136 main.go:141] libmachine: Using API Version  1
	I0316 18:10:20.032442  838136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 18:10:20.032905  838136 main.go:141] libmachine: () Calling .GetMachineName
	I0316 18:10:20.033170  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
	I0316 18:10:20.033364  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetState
	I0316 18:10:20.035302  838136 fix.go:112] recreateIfNeeded on old-k8s-version-985498: state=Stopped err=<nil>
	I0316 18:10:20.035329  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
	W0316 18:10:20.035499  838136 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 18:10:20.037420  838136 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-985498" ...
	I0316 18:10:20.038678  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .Start
	I0316 18:10:20.038900  838136 main.go:141] libmachine: (old-k8s-version-985498) Ensuring networks are active...
	I0316 18:10:20.039777  838136 main.go:141] libmachine: (old-k8s-version-985498) Ensuring network default is active
	I0316 18:10:20.040326  838136 main.go:141] libmachine: (old-k8s-version-985498) Ensuring network mk-old-k8s-version-985498 is active
	I0316 18:10:20.040810  838136 main.go:141] libmachine: (old-k8s-version-985498) Getting domain xml...
	I0316 18:10:20.041632  838136 main.go:141] libmachine: (old-k8s-version-985498) Creating domain...
	I0316 18:10:21.312095  838136 main.go:141] libmachine: (old-k8s-version-985498) Waiting to get IP...
	I0316 18:10:21.313052  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:21.313576  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
	I0316 18:10:21.313666  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:21.313555  838194 retry.go:31] will retry after 222.546171ms: waiting for machine to come up
	I0316 18:10:21.538210  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:21.538853  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
	I0316 18:10:21.538881  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:21.538822  838194 retry.go:31] will retry after 367.506447ms: waiting for machine to come up
	I0316 18:10:21.908499  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:21.908979  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
	I0316 18:10:21.909016  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:21.908938  838194 retry.go:31] will retry after 461.629269ms: waiting for machine to come up
	I0316 18:10:22.372647  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:22.373108  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
	I0316 18:10:22.373139  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:22.373064  838194 retry.go:31] will retry after 477.258709ms: waiting for machine to come up
	I0316 18:10:22.851814  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:22.852392  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
	I0316 18:10:22.852427  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:22.852331  838194 retry.go:31] will retry after 637.020571ms: waiting for machine to come up
	I0316 18:10:23.491033  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:23.491555  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
	I0316 18:10:23.491582  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:23.491505  838194 retry.go:31] will retry after 728.820234ms: waiting for machine to come up
	I0316 18:10:24.222364  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:24.222915  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
	I0316 18:10:24.222950  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:24.222859  838194 retry.go:31] will retry after 816.898868ms: waiting for machine to come up
	I0316 18:10:25.041814  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:25.042283  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
	I0316 18:10:25.042326  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:25.042230  838194 retry.go:31] will retry after 1.416019769s: waiting for machine to come up
	I0316 18:10:26.460801  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:26.461519  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
	I0316 18:10:26.461555  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:26.461451  838194 retry.go:31] will retry after 1.622056862s: waiting for machine to come up
	I0316 18:10:28.086109  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:28.086687  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
	I0316 18:10:28.086720  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:28.086622  838194 retry.go:31] will retry after 1.551263406s: waiting for machine to come up
	I0316 18:10:29.640638  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:29.641271  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
	I0316 18:10:29.641306  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:29.641207  838194 retry.go:31] will retry after 2.520185817s: waiting for machine to come up
	I0316 18:10:32.162746  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:32.163393  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
	I0316 18:10:32.163429  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:32.163353  838194 retry.go:31] will retry after 3.218166666s: waiting for machine to come up
	I0316 18:10:35.382893  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:35.383526  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | unable to find current IP address of domain old-k8s-version-985498 in network mk-old-k8s-version-985498
	I0316 18:10:35.383559  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | I0316 18:10:35.383435  838194 retry.go:31] will retry after 4.016596788s: waiting for machine to come up
	I0316 18:10:39.404886  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:39.405368  838136 main.go:141] libmachine: (old-k8s-version-985498) Found IP for machine: 192.168.61.233
	I0316 18:10:39.405395  838136 main.go:141] libmachine: (old-k8s-version-985498) Reserving static IP address...
	I0316 18:10:39.405413  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has current primary IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:39.405989  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "old-k8s-version-985498", mac: "52:54:00:0d:b3:83", ip: "192.168.61.233"} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
	I0316 18:10:39.406021  838136 main.go:141] libmachine: (old-k8s-version-985498) Reserved static IP address: 192.168.61.233
	I0316 18:10:39.406042  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | skip adding static IP to network mk-old-k8s-version-985498 - found existing host DHCP lease matching {name: "old-k8s-version-985498", mac: "52:54:00:0d:b3:83", ip: "192.168.61.233"}
	I0316 18:10:39.406053  838136 main.go:141] libmachine: (old-k8s-version-985498) Waiting for SSH to be available...
	I0316 18:10:39.406068  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | Getting to WaitForSSH function...
	I0316 18:10:39.407992  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:39.408342  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
	I0316 18:10:39.408371  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:39.408570  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | Using SSH client type: external
	I0316 18:10:39.408605  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | Using SSH private key: /home/jenkins/minikube-integration/18277-781196/.minikube/machines/old-k8s-version-985498/id_rsa (-rw-------)
	I0316 18:10:39.408633  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.233 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18277-781196/.minikube/machines/old-k8s-version-985498/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 18:10:39.408643  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | About to run SSH command:
	I0316 18:10:39.408661  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | exit 0
	I0316 18:10:39.536204  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | SSH cmd err, output: <nil>: 
	I0316 18:10:39.536645  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetConfigRaw
	I0316 18:10:39.537326  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetIP
	I0316 18:10:39.539731  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:39.540108  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
	I0316 18:10:39.540150  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:39.540439  838136 profile.go:142] Saving config to /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/old-k8s-version-985498/config.json ...
	I0316 18:10:39.540686  838136 machine.go:94] provisionDockerMachine start ...
	I0316 18:10:39.540707  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
	I0316 18:10:39.540985  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
	I0316 18:10:39.543626  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:39.544120  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
	I0316 18:10:39.544151  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:39.544228  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHPort
	I0316 18:10:39.544434  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
	I0316 18:10:39.544600  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
	I0316 18:10:39.544778  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHUsername
	I0316 18:10:39.545027  838136 main.go:141] libmachine: Using SSH client type: native
	I0316 18:10:39.545288  838136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0316 18:10:39.545303  838136 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 18:10:39.660751  838136 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 18:10:39.660794  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetMachineName
	I0316 18:10:39.661098  838136 buildroot.go:166] provisioning hostname "old-k8s-version-985498"
	I0316 18:10:39.661127  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetMachineName
	I0316 18:10:39.661364  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
	I0316 18:10:39.664277  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:39.664759  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
	I0316 18:10:39.664795  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:39.664989  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHPort
	I0316 18:10:39.665210  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
	I0316 18:10:39.665386  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
	I0316 18:10:39.665541  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHUsername
	I0316 18:10:39.665720  838136 main.go:141] libmachine: Using SSH client type: native
	I0316 18:10:39.665961  838136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0316 18:10:39.665977  838136 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-985498 && echo "old-k8s-version-985498" | sudo tee /etc/hostname
	I0316 18:10:39.797378  838136 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-985498
	
	I0316 18:10:39.797416  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
	I0316 18:10:39.800557  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:39.800933  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
	I0316 18:10:39.800985  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:39.801139  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHPort
	I0316 18:10:39.801364  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
	I0316 18:10:39.801559  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
	I0316 18:10:39.801731  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHUsername
	I0316 18:10:39.801905  838136 main.go:141] libmachine: Using SSH client type: native
	I0316 18:10:39.802103  838136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0316 18:10:39.802120  838136 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-985498' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-985498/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-985498' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 18:10:39.926528  838136 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 18:10:39.926563  838136 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18277-781196/.minikube CaCertPath:/home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18277-781196/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18277-781196/.minikube}
	I0316 18:10:39.926596  838136 buildroot.go:174] setting up certificates
	I0316 18:10:39.926612  838136 provision.go:84] configureAuth start
	I0316 18:10:39.926626  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetMachineName
	I0316 18:10:39.926990  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetIP
	I0316 18:10:39.930056  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:39.930467  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
	I0316 18:10:39.930501  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:39.930679  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
	I0316 18:10:39.933530  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:39.933907  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
	I0316 18:10:39.933935  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:39.934073  838136 provision.go:143] copyHostCerts
	I0316 18:10:39.934174  838136 exec_runner.go:144] found /home/jenkins/minikube-integration/18277-781196/.minikube/ca.pem, removing ...
	I0316 18:10:39.934194  838136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18277-781196/.minikube/ca.pem
	I0316 18:10:39.934270  838136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18277-781196/.minikube/ca.pem (1082 bytes)
	I0316 18:10:39.934462  838136 exec_runner.go:144] found /home/jenkins/minikube-integration/18277-781196/.minikube/cert.pem, removing ...
	I0316 18:10:39.934480  838136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18277-781196/.minikube/cert.pem
	I0316 18:10:39.934519  838136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18277-781196/.minikube/cert.pem (1123 bytes)
	I0316 18:10:39.934606  838136 exec_runner.go:144] found /home/jenkins/minikube-integration/18277-781196/.minikube/key.pem, removing ...
	I0316 18:10:39.934617  838136 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18277-781196/.minikube/key.pem
	I0316 18:10:39.934644  838136 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18277-781196/.minikube/key.pem (1675 bytes)
	I0316 18:10:39.934713  838136 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18277-781196/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-985498 san=[127.0.0.1 192.168.61.233 localhost minikube old-k8s-version-985498]
	I0316 18:10:40.111602  838136 provision.go:177] copyRemoteCerts
	I0316 18:10:40.111688  838136 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 18:10:40.111725  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
	I0316 18:10:40.114815  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:40.115275  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
	I0316 18:10:40.115317  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:40.115536  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHPort
	I0316 18:10:40.115770  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
	I0316 18:10:40.115974  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHUsername
	I0316 18:10:40.116126  838136 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/old-k8s-version-985498/id_rsa Username:docker}
	I0316 18:10:40.213547  838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 18:10:40.245020  838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0316 18:10:40.278286  838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 18:10:40.310385  838136 provision.go:87] duration metric: took 383.757716ms to configureAuth
	I0316 18:10:40.310424  838136 buildroot.go:189] setting minikube options for container-runtime
	I0316 18:10:40.310620  838136 config.go:182] Loaded profile config "old-k8s-version-985498": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0316 18:10:40.310632  838136 machine.go:97] duration metric: took 769.932485ms to provisionDockerMachine
	I0316 18:10:40.310641  838136 start.go:293] postStartSetup for "old-k8s-version-985498" (driver="kvm2")
	I0316 18:10:40.310650  838136 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 18:10:40.310685  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
	I0316 18:10:40.311113  838136 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 18:10:40.311153  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
	I0316 18:10:40.313816  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:40.314242  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
	I0316 18:10:40.314273  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:40.314463  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHPort
	I0316 18:10:40.314713  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
	I0316 18:10:40.314895  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHUsername
	I0316 18:10:40.315042  838136 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/old-k8s-version-985498/id_rsa Username:docker}
	I0316 18:10:40.403815  838136 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 18:10:40.409451  838136 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 18:10:40.409493  838136 filesync.go:126] Scanning /home/jenkins/minikube-integration/18277-781196/.minikube/addons for local assets ...
	I0316 18:10:40.409577  838136 filesync.go:126] Scanning /home/jenkins/minikube-integration/18277-781196/.minikube/files for local assets ...
	I0316 18:10:40.409678  838136 filesync.go:149] local asset: /home/jenkins/minikube-integration/18277-781196/.minikube/files/etc/ssl/certs/7884422.pem -> 7884422.pem in /etc/ssl/certs
	I0316 18:10:40.409770  838136 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 18:10:40.421303  838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/files/etc/ssl/certs/7884422.pem --> /etc/ssl/certs/7884422.pem (1708 bytes)
	I0316 18:10:40.452568  838136 start.go:296] duration metric: took 141.910752ms for postStartSetup
	I0316 18:10:40.452624  838136 fix.go:56] duration metric: took 20.439246626s for fixHost
	I0316 18:10:40.452650  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
	I0316 18:10:40.455622  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:40.456038  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
	I0316 18:10:40.456075  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:40.456316  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHPort
	I0316 18:10:40.456559  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
	I0316 18:10:40.456763  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
	I0316 18:10:40.456999  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHUsername
	I0316 18:10:40.457227  838136 main.go:141] libmachine: Using SSH client type: native
	I0316 18:10:40.457479  838136 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.233 22 <nil> <nil>}
	I0316 18:10:40.457498  838136 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0316 18:10:40.573393  838136 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710612640.549571184
	
	I0316 18:10:40.573420  838136 fix.go:216] guest clock: 1710612640.549571184
	I0316 18:10:40.573430  838136 fix.go:229] Guest: 2024-03-16 18:10:40.549571184 +0000 UTC Remote: 2024-03-16 18:10:40.452629594 +0000 UTC m=+26.360717773 (delta=96.94159ms)
	I0316 18:10:40.573489  838136 fix.go:200] guest clock delta is within tolerance: 96.94159ms
	I0316 18:10:40.573501  838136 start.go:83] releasing machines lock for "old-k8s-version-985498", held for 20.560153338s
	I0316 18:10:40.573547  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
	I0316 18:10:40.573911  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetIP
	I0316 18:10:40.577073  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:40.577471  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
	I0316 18:10:40.577504  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:40.577730  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
	I0316 18:10:40.578282  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
	I0316 18:10:40.578505  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
	I0316 18:10:40.578650  838136 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 18:10:40.578701  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
	I0316 18:10:40.578767  838136 ssh_runner.go:195] Run: cat /version.json
	I0316 18:10:40.578795  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
	I0316 18:10:40.581653  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:40.581938  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:40.582103  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
	I0316 18:10:40.582135  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:40.582407  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHPort
	I0316 18:10:40.582409  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
	I0316 18:10:40.582485  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:40.582636  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHPort
	I0316 18:10:40.582644  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
	I0316 18:10:40.582931  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHUsername
	I0316 18:10:40.582931  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
	I0316 18:10:40.583100  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHUsername
	I0316 18:10:40.583109  838136 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/old-k8s-version-985498/id_rsa Username:docker}
	I0316 18:10:40.583245  838136 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/old-k8s-version-985498/id_rsa Username:docker}
	I0316 18:10:40.669901  838136 ssh_runner.go:195] Run: systemctl --version
	I0316 18:10:40.699529  838136 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 18:10:40.707058  838136 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 18:10:40.707154  838136 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 18:10:40.730239  838136 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 18:10:40.730271  838136 start.go:494] detecting cgroup driver to use...
	I0316 18:10:40.730364  838136 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0316 18:10:40.761933  838136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0316 18:10:40.781984  838136 docker.go:217] disabling cri-docker service (if available) ...
	I0316 18:10:40.782061  838136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 18:10:40.801506  838136 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 18:10:40.819340  838136 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 18:10:40.969263  838136 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 18:10:41.151776  838136 docker.go:233] disabling docker service ...
	I0316 18:10:41.151862  838136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 18:10:41.170046  838136 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 18:10:41.186577  838136 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 18:10:41.320488  838136 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 18:10:41.451266  838136 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 18:10:41.472978  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 18:10:41.504957  838136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0316 18:10:41.520192  838136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0316 18:10:41.534403  838136 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0316 18:10:41.534478  838136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0316 18:10:41.549329  838136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0316 18:10:41.564261  838136 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0316 18:10:41.578801  838136 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0316 18:10:41.593218  838136 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 18:10:41.608880  838136 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0316 18:10:41.624269  838136 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 18:10:41.638565  838136 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 18:10:41.638657  838136 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 18:10:41.658517  838136 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 18:10:41.673552  838136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 18:10:41.835260  838136 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0316 18:10:41.871243  838136 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0316 18:10:41.871346  838136 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0316 18:10:41.879650  838136 retry.go:31] will retry after 585.266083ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0316 18:10:42.465241  838136 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
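
The first stat races the containerd restart and loses; the retry roughly half a second later succeeds. The pattern is a bounded poll on the socket path, approximately as below (a sketch, not the retry.go implementation, which also adds jitter to the delays):

package main

import (
	"fmt"
	"os"
	"time"
)

// Poll until the unix socket exists or the 60s budget is spent.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
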
	I0316 18:10:42.471699  838136 start.go:562] Will wait 60s for crictl version
	I0316 18:10:42.471794  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:10:42.477964  838136 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 18:10:42.526073  838136 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.14
	RuntimeApiVersion:  v1
	I0316 18:10:42.526153  838136 ssh_runner.go:195] Run: containerd --version
	I0316 18:10:42.560338  838136 ssh_runner.go:195] Run: containerd --version
	I0316 18:10:42.593533  838136 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.14 ...
	I0316 18:10:42.595003  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetIP
	I0316 18:10:42.598356  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:42.598926  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
	I0316 18:10:42.598994  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:10:42.599201  838136 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0316 18:10:42.606182  838136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 18:10:42.625976  838136 kubeadm.go:877] updating cluster {Name:old-k8s-version-985498 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-985498 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.233 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 18:10:42.626141  838136 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0316 18:10:42.626223  838136 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 18:10:42.669448  838136 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0316 18:10:42.669536  838136 ssh_runner.go:195] Run: which lz4
	I0316 18:10:42.674827  838136 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0316 18:10:42.680325  838136 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 18:10:42.680366  838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (472503869 bytes)
	I0316 18:10:44.949609  838136 containerd.go:548] duration metric: took 2.274832755s to copy over tarball
	I0316 18:10:44.949734  838136 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0316 18:10:48.512412  838136 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.56263958s)
	I0316 18:10:48.512448  838136 containerd.go:555] duration metric: took 3.562786414s to extract the tarball
	I0316 18:10:48.512460  838136 ssh_runner.go:146] rm: /preloaded.tar.lz4
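
Since /preloaded.tar.lz4 was absent, the ~472 MB preloaded-images tarball is copied in and unpacked into /var, where containerd keeps its content store, then removed to reclaim disk. The extraction step as a Go sketch (illustrative; assumes tar and lz4 are on the guest and the caller has root):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Stream-decompress with lz4 and extract into /var, keeping the
// security.capability xattrs, matching the tar flags in the log.
func main() {
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Delete the tarball afterwards, as the log does.
	_ = os.Remove("/preloaded.tar.lz4")
}
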
	I0316 18:10:48.576915  838136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 18:10:48.715869  838136 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0316 18:10:48.754638  838136 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 18:10:48.820562  838136 retry.go:31] will retry after 253.219113ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-16T18:10:48Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I0316 18:10:49.074051  838136 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 18:10:49.121260  838136 containerd.go:608] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0316 18:10:49.121296  838136 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0316 18:10:49.121430  838136 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 18:10:49.121429  838136 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 18:10:49.121429  838136 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0316 18:10:49.121520  838136 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0316 18:10:49.121520  838136 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 18:10:49.121525  838136 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0316 18:10:49.121729  838136 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 18:10:49.121449  838136 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 18:10:49.123357  838136 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 18:10:49.123624  838136 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 18:10:49.123660  838136 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0316 18:10:49.123687  838136 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 18:10:49.123781  838136 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0316 18:10:49.123623  838136 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 18:10:49.123627  838136 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 18:10:49.123895  838136 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0316 18:10:49.292076  838136 containerd.go:252] Checking existence of image with name "registry.k8s.io/etcd:3.4.13-0" and sha "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934"
	I0316 18:10:49.292145  838136 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0316 18:10:49.315743  838136 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.20.0" and sha "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899"
	I0316 18:10:49.315853  838136 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0316 18:10:49.316284  838136 containerd.go:252] Checking existence of image with name "registry.k8s.io/coredns:1.7.0" and sha "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16"
	I0316 18:10:49.316353  838136 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0316 18:10:49.335326  838136 containerd.go:252] Checking existence of image with name "registry.k8s.io/pause:3.2" and sha "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
	I0316 18:10:49.335410  838136 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0316 18:10:49.351065  838136 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.20.0" and sha "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080"
	I0316 18:10:49.351144  838136 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0316 18:10:49.353042  838136 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.20.0" and sha "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99"
	I0316 18:10:49.353130  838136 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0316 18:10:49.382901  838136 containerd.go:252] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.20.0" and sha "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc"
	I0316 18:10:49.382999  838136 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0316 18:10:49.613076  838136 containerd.go:252] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0316 18:10:49.613213  838136 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images check
	I0316 18:10:50.173021  838136 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0316 18:10:50.173118  838136 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0316 18:10:50.173039  838136 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0316 18:10:50.173235  838136 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 18:10:50.173181  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:10:50.173288  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:10:50.369202  838136 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.052814345s)
	I0316 18:10:50.369298  838136 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0316 18:10:50.369376  838136 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0316 18:10:50.369445  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:10:50.846406  838136 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.510962988s)
	I0316 18:10:50.846482  838136 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0316 18:10:50.846523  838136 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0316 18:10:50.846578  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:10:50.955793  838136 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.604615227s)
	I0316 18:10:50.955872  838136 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0316 18:10:50.955922  838136 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 18:10:50.955979  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:10:50.956009  838136 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.602852446s)
	I0316 18:10:50.956074  838136 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0316 18:10:50.956114  838136 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 18:10:50.956160  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:10:50.956553  838136 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.573535089s)
	I0316 18:10:50.956605  838136 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0316 18:10:50.956639  838136 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 18:10:50.956689  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:10:50.968854  838136 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images check: (1.355597673s)
	I0316 18:10:50.969003  838136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0316 18:10:50.969118  838136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0316 18:10:50.969024  838136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0316 18:10:50.969047  838136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0316 18:10:50.969290  838136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 18:10:50.974720  838136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0316 18:10:50.974812  838136 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0316 18:10:51.167022  838136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18277-781196/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0316 18:10:51.167036  838136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18277-781196/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0316 18:10:51.167035  838136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18277-781196/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0316 18:10:51.167121  838136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18277-781196/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0316 18:10:51.167166  838136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18277-781196/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0316 18:10:51.171034  838136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18277-781196/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0316 18:10:51.171105  838136 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18277-781196/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0316 18:10:51.171167  838136 cache_images.go:92] duration metric: took 2.049852434s to LoadCachedImages
	W0316 18:10:51.171235  838136 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18277-781196/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18277-781196/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0316 18:10:51.171248  838136 kubeadm.go:928] updating node { 192.168.61.233 8443 v1.20.0 containerd true true} ...
	I0316 18:10:51.171417  838136 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-985498 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.233
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-985498 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
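
The drop-in above uses the standard systemd override idiom: the empty ExecStart= clears the base unit's command before the replacement ExecStart is set. The flag set is the v1.20-era one (--container-runtime=remote and --network-plugin=cni were removed from later kubelets), pointing the kubelet at containerd's CRI socket. A sketch that renders such a drop-in from a template (the field values are stand-ins taken from this log):

package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	if err := t.Execute(os.Stdout, map[string]string{
		"Version": "v1.20.0", "Node": "old-k8s-version-985498", "IP": "192.168.61.233",
	}); err != nil {
		panic(err)
	}
}
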
	I0316 18:10:51.171507  838136 ssh_runner.go:195] Run: sudo crictl info
	I0316 18:10:51.211690  838136 cni.go:84] Creating CNI manager for ""
	I0316 18:10:51.211724  838136 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0316 18:10:51.211740  838136 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 18:10:51.211767  838136 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.233 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-985498 NodeName:old-k8s-version-985498 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.233"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.233 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0316 18:10:51.211984  838136 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.233
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-985498"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.233
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.233"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
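The generated file is four YAML documents in one: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta2, the API level current for Kubernetes 1.20), a KubeletConfiguration whose cgroupDriver: cgroupfs matches the SystemdCgroup = false edit made to containerd earlier, and a KubeProxyConfiguration. A stdlib-only sketch of how such a multi-document file splits on the --- separators (illustrative; real tooling parses the YAML properly):

package main

import (
	"fmt"
	"strings"
)

func main() {
	config := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
	for _, doc := range strings.Split(config, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind:") {
				fmt.Println(line)
			}
		}
	}
}
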
	I0316 18:10:51.212083  838136 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0316 18:10:51.228556  838136 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 18:10:51.228674  838136 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 18:10:51.243247  838136 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (444 bytes)
	I0316 18:10:51.269296  838136 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 18:10:51.294856  838136 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2131 bytes)
	I0316 18:10:51.318596  838136 ssh_runner.go:195] Run: grep 192.168.61.233	control-plane.minikube.internal$ /etc/hosts
	I0316 18:10:51.324332  838136 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.233	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
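
The bash one-liner above is an upsert on /etc/hosts: it filters out any existing control-plane.minikube.internal entry, appends the current one, and installs the result with cp rather than mv, which keeps the original file's inode intact. The same logic in Go (a sketch; it writes the result to /tmp rather than the real /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

// Remove any old entry for name, then append the fresh ip<TAB>name line.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	hosts, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	updated := upsertHost(string(hosts), "192.168.61.233", "control-plane.minikube.internal")
	if err := os.WriteFile("/tmp/hosts.new", []byte(updated), 0644); err != nil {
		panic(err)
	}
}
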
	I0316 18:10:51.343249  838136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 18:10:51.481783  838136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 18:10:51.510038  838136 certs.go:68] Setting up /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/old-k8s-version-985498 for IP: 192.168.61.233
	I0316 18:10:51.510076  838136 certs.go:194] generating shared ca certs ...
	I0316 18:10:51.510102  838136 certs.go:226] acquiring lock for ca certs: {Name:mk0c50354a81ee6e126f21f3d5a16214134194fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 18:10:51.510322  838136 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18277-781196/.minikube/ca.key
	I0316 18:10:51.510398  838136 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18277-781196/.minikube/proxy-client-ca.key
	I0316 18:10:51.510416  838136 certs.go:256] generating profile certs ...
	I0316 18:10:51.510563  838136 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/old-k8s-version-985498/client.key
	I0316 18:10:51.510652  838136 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/old-k8s-version-985498/apiserver.key.39495394
	I0316 18:10:51.510708  838136 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/old-k8s-version-985498/proxy-client.key
	I0316 18:10:51.510895  838136 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/788442.pem (1338 bytes)
	W0316 18:10:51.510939  838136 certs.go:480] ignoring /home/jenkins/minikube-integration/18277-781196/.minikube/certs/788442_empty.pem, impossibly tiny 0 bytes
	I0316 18:10:51.510947  838136 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca-key.pem (1679 bytes)
	I0316 18:10:51.510974  838136 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca.pem (1082 bytes)
	I0316 18:10:51.511006  838136 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/cert.pem (1123 bytes)
	I0316 18:10:51.511042  838136 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/key.pem (1675 bytes)
	I0316 18:10:51.511102  838136 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-781196/.minikube/files/etc/ssl/certs/7884422.pem (1708 bytes)
	I0316 18:10:51.512190  838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 18:10:51.570699  838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 18:10:51.611800  838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 18:10:51.659890  838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 18:10:51.709400  838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/old-k8s-version-985498/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0316 18:10:51.755499  838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/old-k8s-version-985498/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 18:10:51.812896  838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/old-k8s-version-985498/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 18:10:51.845974  838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/old-k8s-version-985498/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0316 18:10:51.879055  838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 18:10:51.916045  838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/certs/788442.pem --> /usr/share/ca-certificates/788442.pem (1338 bytes)
	I0316 18:10:51.950923  838136 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/files/etc/ssl/certs/7884422.pem --> /usr/share/ca-certificates/7884422.pem (1708 bytes)
	I0316 18:10:51.983369  838136 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 18:10:52.009024  838136 ssh_runner.go:195] Run: openssl version
	I0316 18:10:52.016900  838136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 18:10:52.033483  838136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 18:10:52.039694  838136 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 16 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0316 18:10:52.039802  838136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 18:10:52.047286  838136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 18:10:52.063453  838136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/788442.pem && ln -fs /usr/share/ca-certificates/788442.pem /etc/ssl/certs/788442.pem"
	I0316 18:10:52.079354  838136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/788442.pem
	I0316 18:10:52.085657  838136 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 16 17:02 /usr/share/ca-certificates/788442.pem
	I0316 18:10:52.085721  838136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/788442.pem
	I0316 18:10:52.093263  838136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/788442.pem /etc/ssl/certs/51391683.0"
	I0316 18:10:52.108530  838136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7884422.pem && ln -fs /usr/share/ca-certificates/7884422.pem /etc/ssl/certs/7884422.pem"
	I0316 18:10:52.124106  838136 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7884422.pem
	I0316 18:10:52.131740  838136 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 16 17:02 /usr/share/ca-certificates/7884422.pem
	I0316 18:10:52.131825  838136 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7884422.pem
	I0316 18:10:52.141047  838136 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7884422.pem /etc/ssl/certs/3ec20f2e.0"
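
Each pair of commands above installs a CA certificate the way OpenSSL expects to find it: as a symlink in /etc/ssl/certs named <subject-hash>.0, where the hash comes from openssl x509 -hash -noout. One iteration, sketched in Go (assumes the openssl binary is present; /etc/ssl/certs needs root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pem = "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // drop any stale link first
	if err := os.Symlink(pem, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
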
	I0316 18:10:52.157549  838136 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 18:10:52.165808  838136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 18:10:52.173668  838136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 18:10:52.183767  838136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 18:10:52.193964  838136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 18:10:52.204458  838136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 18:10:52.214907  838136 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
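
The -checkend 86400 runs above ask whether each certificate will expire within the next 24 hours (openssl exits nonzero if so), which is how the restart path decides the existing certs are still usable. A pure-Go equivalent using crypto/x509 (a sketch, assuming each file holds a single PEM certificate):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Equivalent of `openssl x509 -checkend 86400`: true if the cert expires
// within the given window.
func expiresSoon(path string, within time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(within).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
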
	I0316 18:10:52.223094  838136 kubeadm.go:391] StartCluster: {Name:old-k8s-version-985498 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-985498 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.233 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 18:10:52.223233  838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0316 18:10:52.223368  838136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 18:10:52.283104  838136 cri.go:89] found id: ""
	I0316 18:10:52.283208  838136 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 18:10:52.297855  838136 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 18:10:52.297885  838136 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 18:10:52.297892  838136 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 18:10:52.297948  838136 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 18:10:52.312007  838136 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 18:10:52.312741  838136 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-985498" does not appear in /home/jenkins/minikube-integration/18277-781196/kubeconfig
	I0316 18:10:52.313164  838136 kubeconfig.go:62] /home/jenkins/minikube-integration/18277-781196/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-985498" cluster setting kubeconfig missing "old-k8s-version-985498" context setting]
	I0316 18:10:52.313996  838136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-781196/kubeconfig: {Name:mke76908283b58e263a226954335fd60fd02692a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 18:10:52.315560  838136 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 18:10:52.328791  838136 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.233
	I0316 18:10:52.328841  838136 kubeadm.go:1154] stopping kube-system containers ...
	I0316 18:10:52.328860  838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0316 18:10:52.328936  838136 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 18:10:52.384396  838136 cri.go:89] found id: ""
	I0316 18:10:52.384490  838136 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 18:10:52.405530  838136 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 18:10:52.422845  838136 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 18:10:52.422874  838136 kubeadm.go:156] found existing configuration files:
	
	I0316 18:10:52.422931  838136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 18:10:52.435759  838136 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 18:10:52.435862  838136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 18:10:52.448728  838136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 18:10:52.463228  838136 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 18:10:52.463318  838136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 18:10:52.476194  838136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 18:10:52.488899  838136 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 18:10:52.488997  838136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 18:10:52.502754  838136 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 18:10:52.519699  838136 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 18:10:52.519801  838136 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
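
Each of the four kubeconfigs is grepped for the expected control-plane endpoint and removed when the check fails (here they simply do not exist yet), so the kubeconfig init phase below regenerates all of them. The check-and-remove pattern, sketched in Go:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(f) // stale or missing: let the kubeconfig phase regenerate it
			fmt.Println("removed", f)
		}
	}
}
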
	I0316 18:10:52.537443  838136 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 18:10:52.555161  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 18:10:52.726314  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 18:10:53.471844  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 18:10:53.737175  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 18:10:53.847785  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
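
Rather than a full kubeadm init, the restart path replays only the phases it needs: certs, kubeconfig, kubelet-start, control-plane, and etcd, all against the generated /var/tmp/minikube/kubeadm.yaml. Sketched as a loop over the subcommands (illustrative; assumes the pinned kubeadm binary is on PATH and the caller has root):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
			os.Exit(1)
		}
	}
}
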
	I0316 18:10:53.967192  838136 api_server.go:52] waiting for apiserver process to appear ...
	I0316 18:10:53.967378  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:10:54.468173  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:10:54.967746  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:10:55.467902  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:10:55.968049  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:10:56.467610  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:10:56.968426  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:10:57.467602  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:10:57.967524  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:10:58.468280  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:10:58.968219  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:10:59.467869  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:10:59.968099  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:00.467595  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:00.968048  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:01.467398  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:01.968323  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:02.467993  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:02.967635  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:03.467602  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:03.967580  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:04.468074  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:04.968250  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:05.467376  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:05.967683  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:06.468018  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:06.967572  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:07.468059  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:07.967500  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:08.467656  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:08.967734  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:09.467594  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:09.968197  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:10.467605  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:10.967628  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:11.467363  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:11.967611  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:12.468445  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:12.968106  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:13.467411  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:13.968224  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:14.467977  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:14.967979  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:15.468293  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:15.968081  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:16.468180  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:16.968339  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:17.468090  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:17.968057  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:18.467469  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:18.968180  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:19.468133  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:19.967667  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:20.467601  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:20.968051  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:21.468076  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:21.967628  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:22.467801  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:22.967632  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:23.467946  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:23.968421  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:24.468452  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:24.968223  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:25.468353  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:25.967603  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:26.468242  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:26.967430  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:27.467842  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:27.967560  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:28.467586  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:28.967716  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:11:28.984523  838136 api_server.go:72] duration metric: took 35.017328517s to wait for apiserver process to appear ...
	I0316 18:11:28.984560  838136 api_server.go:88] waiting for apiserver healthz status ...
	I0316 18:11:28.984607  838136 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8443/healthz ...
	I0316 18:11:32.870510  838136 api_server.go:279] https://192.168.61.233:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 18:11:32.870552  838136 api_server.go:103] status: https://192.168.61.233:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 18:11:32.870575  838136 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8443/healthz ...
	I0316 18:11:32.913992  838136 api_server.go:279] https://192.168.61.233:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 18:11:32.914029  838136 api_server.go:103] status: https://192.168.61.233:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 18:11:32.985178  838136 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8443/healthz ...
	I0316 18:11:33.052130  838136 api_server.go:279] https://192.168.61.233:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0316 18:11:33.052184  838136 api_server.go:103] status: https://192.168.61.233:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0316 18:11:33.485698  838136 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8443/healthz ...
	I0316 18:11:33.492841  838136 api_server.go:279] https://192.168.61.233:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0316 18:11:33.492885  838136 api_server.go:103] status: https://192.168.61.233:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0316 18:11:33.985533  838136 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8443/healthz ...
	I0316 18:11:34.009045  838136 api_server.go:279] https://192.168.61.233:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0316 18:11:34.009085  838136 api_server.go:103] status: https://192.168.61.233:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0316 18:11:34.485324  838136 api_server.go:253] Checking apiserver healthz at https://192.168.61.233:8443/healthz ...
	I0316 18:11:34.493652  838136 api_server.go:279] https://192.168.61.233:8443/healthz returned 200:
	ok
	I0316 18:11:34.503212  838136 api_server.go:141] control plane version: v1.20.0
	I0316 18:11:34.503250  838136 api_server.go:131] duration metric: took 5.518681043s to wait for apiserver health ...
	I0316 18:11:34.503263  838136 cni.go:84] Creating CNI manager for ""
	I0316 18:11:34.503272  838136 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0316 18:11:34.504811  838136 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 18:11:34.506291  838136 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 18:11:34.526346  838136 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0316 18:11:34.557313  838136 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 18:11:34.567657  838136 system_pods.go:59] 8 kube-system pods found
	I0316 18:11:34.567715  838136 system_pods.go:61] "coredns-74ff55c5b-p8874" [e9f21303-b312-4077-8cdc-aa1f38acf881] Running
	I0316 18:11:34.567724  838136 system_pods.go:61] "etcd-old-k8s-version-985498" [2d58d97d-a406-4bdf-98f3-7456be608d31] Running
	I0316 18:11:34.567730  838136 system_pods.go:61] "kube-apiserver-old-k8s-version-985498" [515faf17-7382-4227-8a1c-d9d7f40dd40b] Running
	I0316 18:11:34.567741  838136 system_pods.go:61] "kube-controller-manager-old-k8s-version-985498" [e2f7c70f-6441-4b0d-914f-22fbea47af98] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0316 18:11:34.567760  838136 system_pods.go:61] "kube-proxy-nvd4k" [daf8607f-2ff3-4d80-b1ed-ca2d24cb6b36] Running
	I0316 18:11:34.567766  838136 system_pods.go:61] "kube-scheduler-old-k8s-version-985498" [197c4d67-dd09-4cfd-91b5-9cfbadab76dc] Running
	I0316 18:11:34.567771  838136 system_pods.go:61] "metrics-server-9975d5f86-xqhk9" [ba5c6fa2-191f-4ae2-8aee-b1075a50b37b] Pending
	I0316 18:11:34.567774  838136 system_pods.go:61] "storage-provisioner" [d89b271f-838a-4592-b128-fcb2a06fc5e9] Running
	I0316 18:11:34.567782  838136 system_pods.go:74] duration metric: took 10.438526ms to wait for pod list to return data ...
	I0316 18:11:34.567800  838136 node_conditions.go:102] verifying NodePressure condition ...
	I0316 18:11:34.581203  838136 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 18:11:34.581237  838136 node_conditions.go:123] node cpu capacity is 2
	I0316 18:11:34.581250  838136 node_conditions.go:105] duration metric: took 13.443606ms to run NodePressure ...
	I0316 18:11:34.581319  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 18:11:34.942383  838136 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0316 18:11:34.950961  838136 kubeadm.go:733] kubelet initialised
	I0316 18:11:34.950999  838136 kubeadm.go:734] duration metric: took 8.586934ms waiting for restarted kubelet to initialise ...
	I0316 18:11:34.951010  838136 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 18:11:34.962246  838136 pod_ready.go:78] waiting up to 4m0s for pod "coredns-74ff55c5b-p8874" in "kube-system" namespace to be "Ready" ...
	I0316 18:11:34.974731  838136 pod_ready.go:97] node "old-k8s-version-985498" hosting pod "coredns-74ff55c5b-p8874" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
	I0316 18:11:34.974773  838136 pod_ready.go:81] duration metric: took 12.48904ms for pod "coredns-74ff55c5b-p8874" in "kube-system" namespace to be "Ready" ...
	E0316 18:11:34.974788  838136 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-985498" hosting pod "coredns-74ff55c5b-p8874" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
	I0316 18:11:34.974798  838136 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
	I0316 18:11:34.981823  838136 pod_ready.go:97] node "old-k8s-version-985498" hosting pod "etcd-old-k8s-version-985498" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
	I0316 18:11:34.981862  838136 pod_ready.go:81] duration metric: took 7.047238ms for pod "etcd-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
	E0316 18:11:34.981877  838136 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-985498" hosting pod "etcd-old-k8s-version-985498" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
	I0316 18:11:34.981886  838136 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
	I0316 18:11:34.995159  838136 pod_ready.go:97] node "old-k8s-version-985498" hosting pod "kube-apiserver-old-k8s-version-985498" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
	I0316 18:11:34.995193  838136 pod_ready.go:81] duration metric: took 13.296838ms for pod "kube-apiserver-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
	E0316 18:11:34.995202  838136 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-985498" hosting pod "kube-apiserver-old-k8s-version-985498" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
	I0316 18:11:34.995210  838136 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
	I0316 18:11:35.001459  838136 pod_ready.go:97] node "old-k8s-version-985498" hosting pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
	I0316 18:11:35.001499  838136 pod_ready.go:81] duration metric: took 6.27941ms for pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
	E0316 18:11:35.001514  838136 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-985498" hosting pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
	I0316 18:11:35.001525  838136 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nvd4k" in "kube-system" namespace to be "Ready" ...
	I0316 18:11:35.361513  838136 pod_ready.go:97] node "old-k8s-version-985498" hosting pod "kube-proxy-nvd4k" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
	I0316 18:11:35.361550  838136 pod_ready.go:81] duration metric: took 360.016182ms for pod "kube-proxy-nvd4k" in "kube-system" namespace to be "Ready" ...
	E0316 18:11:35.361564  838136 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-985498" hosting pod "kube-proxy-nvd4k" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
	I0316 18:11:35.361573  838136 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
	I0316 18:11:35.762838  838136 pod_ready.go:97] node "old-k8s-version-985498" hosting pod "kube-scheduler-old-k8s-version-985498" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
	I0316 18:11:35.762878  838136 pod_ready.go:81] duration metric: took 401.293557ms for pod "kube-scheduler-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
	E0316 18:11:35.762891  838136 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-985498" hosting pod "kube-scheduler-old-k8s-version-985498" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
	I0316 18:11:35.762901  838136 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace to be "Ready" ...
	I0316 18:11:36.161627  838136 pod_ready.go:97] node "old-k8s-version-985498" hosting pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
	I0316 18:11:36.161683  838136 pod_ready.go:81] duration metric: took 398.769929ms for pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace to be "Ready" ...
	E0316 18:11:36.161697  838136 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-985498" hosting pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-985498" has status "Ready":"False"
	I0316 18:11:36.161707  838136 pod_ready.go:38] duration metric: took 1.210684392s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 18:11:36.161732  838136 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0316 18:11:36.176854  838136 ops.go:34] apiserver oom_adj: -16
	I0316 18:11:36.176886  838136 kubeadm.go:591] duration metric: took 43.878986103s to restartPrimaryControlPlane
	I0316 18:11:36.176899  838136 kubeadm.go:393] duration metric: took 43.953820603s to StartCluster
	I0316 18:11:36.176925  838136 settings.go:142] acquiring lock: {Name:mk5e1e3433840176063e5baa5db7056716046a6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 18:11:36.177083  838136 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18277-781196/kubeconfig
	I0316 18:11:36.178481  838136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-781196/kubeconfig: {Name:mke76908283b58e263a226954335fd60fd02692a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 18:11:36.178774  838136 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.233 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0316 18:11:36.180502  838136 out.go:177] * Verifying Kubernetes components...
	I0316 18:11:36.178867  838136 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0316 18:11:36.179001  838136 config.go:182] Loaded profile config "old-k8s-version-985498": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0316 18:11:36.182040  838136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 18:11:36.180607  838136 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-985498"
	I0316 18:11:36.182129  838136 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-985498"
	W0316 18:11:36.182149  838136 addons.go:243] addon storage-provisioner should already be in state true
	I0316 18:11:36.180618  838136 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-985498"
	I0316 18:11:36.182203  838136 host.go:66] Checking if "old-k8s-version-985498" exists ...
	I0316 18:11:36.182220  838136 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-985498"
	W0316 18:11:36.182237  838136 addons.go:243] addon metrics-server should already be in state true
	I0316 18:11:36.182277  838136 host.go:66] Checking if "old-k8s-version-985498" exists ...
	I0316 18:11:36.180620  838136 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-985498"
	I0316 18:11:36.182380  838136 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-985498"
	I0316 18:11:36.180611  838136 addons.go:69] Setting dashboard=true in profile "old-k8s-version-985498"
	I0316 18:11:36.182474  838136 addons.go:234] Setting addon dashboard=true in "old-k8s-version-985498"
	W0316 18:11:36.182488  838136 addons.go:243] addon dashboard should already be in state true
	I0316 18:11:36.182514  838136 host.go:66] Checking if "old-k8s-version-985498" exists ...
	I0316 18:11:36.182699  838136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 18:11:36.182717  838136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 18:11:36.182734  838136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 18:11:36.182747  838136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 18:11:36.182751  838136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 18:11:36.182755  838136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 18:11:36.183027  838136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 18:11:36.183052  838136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 18:11:36.200932  838136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43305
	I0316 18:11:36.201436  838136 main.go:141] libmachine: () Calling .GetVersion
	I0316 18:11:36.202011  838136 main.go:141] libmachine: Using API Version  1
	I0316 18:11:36.202040  838136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 18:11:36.202468  838136 main.go:141] libmachine: () Calling .GetMachineName
	I0316 18:11:36.202986  838136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 18:11:36.203022  838136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 18:11:36.204619  838136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35467
	I0316 18:11:36.205064  838136 main.go:141] libmachine: () Calling .GetVersion
	I0316 18:11:36.205611  838136 main.go:141] libmachine: Using API Version  1
	I0316 18:11:36.205629  838136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 18:11:36.205992  838136 main.go:141] libmachine: () Calling .GetMachineName
	I0316 18:11:36.206210  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetState
	I0316 18:11:36.209087  838136 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-985498"
	W0316 18:11:36.209109  838136 addons.go:243] addon default-storageclass should already be in state true
	I0316 18:11:36.209139  838136 host.go:66] Checking if "old-k8s-version-985498" exists ...
	I0316 18:11:36.209413  838136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 18:11:36.209449  838136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 18:11:36.221952  838136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37873
	I0316 18:11:36.222542  838136 main.go:141] libmachine: () Calling .GetVersion
	I0316 18:11:36.223131  838136 main.go:141] libmachine: Using API Version  1
	I0316 18:11:36.223167  838136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 18:11:36.223617  838136 main.go:141] libmachine: () Calling .GetMachineName
	I0316 18:11:36.223833  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetState
	I0316 18:11:36.226013  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
	I0316 18:11:36.228536  838136 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0316 18:11:36.230082  838136 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0316 18:11:36.230114  838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0316 18:11:36.230150  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
	I0316 18:11:36.234118  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:11:36.234161  838136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44663
	I0316 18:11:36.234343  838136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34539
	I0316 18:11:36.234769  838136 main.go:141] libmachine: () Calling .GetVersion
	I0316 18:11:36.234886  838136 main.go:141] libmachine: () Calling .GetVersion
	I0316 18:11:36.235597  838136 main.go:141] libmachine: Using API Version  1
	I0316 18:11:36.235613  838136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 18:11:36.235675  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
	I0316 18:11:36.235688  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:11:36.235775  838136 main.go:141] libmachine: Using API Version  1
	I0316 18:11:36.235782  838136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 18:11:36.236050  838136 main.go:141] libmachine: () Calling .GetMachineName
	I0316 18:11:36.236114  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHPort
	I0316 18:11:36.236237  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
	I0316 18:11:36.236276  838136 main.go:141] libmachine: () Calling .GetMachineName
	I0316 18:11:36.236445  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHUsername
	I0316 18:11:36.236691  838136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 18:11:36.236739  838136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 18:11:36.236781  838136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 18:11:36.236798  838136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 18:11:36.237065  838136 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/old-k8s-version-985498/id_rsa Username:docker}
	I0316 18:11:36.241687  838136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34995
	I0316 18:11:36.242296  838136 main.go:141] libmachine: () Calling .GetVersion
	I0316 18:11:36.242865  838136 main.go:141] libmachine: Using API Version  1
	I0316 18:11:36.242884  838136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 18:11:36.243348  838136 main.go:141] libmachine: () Calling .GetMachineName
	I0316 18:11:36.243986  838136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 18:11:36.244029  838136 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 18:11:36.259433  838136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33197
	I0316 18:11:36.260193  838136 main.go:141] libmachine: () Calling .GetVersion
	I0316 18:11:36.260357  838136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37319
	I0316 18:11:36.260722  838136 main.go:141] libmachine: () Calling .GetVersion
	I0316 18:11:36.260954  838136 main.go:141] libmachine: Using API Version  1
	I0316 18:11:36.260974  838136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 18:11:36.261212  838136 main.go:141] libmachine: Using API Version  1
	I0316 18:11:36.261233  838136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 18:11:36.261619  838136 main.go:141] libmachine: () Calling .GetMachineName
	I0316 18:11:36.261729  838136 main.go:141] libmachine: () Calling .GetMachineName
	I0316 18:11:36.262042  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetState
	I0316 18:11:36.262194  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetState
	I0316 18:11:36.264661  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
	I0316 18:11:36.264741  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
	I0316 18:11:36.266992  838136 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0316 18:11:36.265746  838136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33845
	I0316 18:11:36.272723  838136 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0316 18:11:36.271366  838136 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 18:11:36.272207  838136 main.go:141] libmachine: () Calling .GetVersion
	I0316 18:11:36.274209  838136 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0316 18:11:36.274233  838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0316 18:11:36.274263  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
	I0316 18:11:36.274877  838136 main.go:141] libmachine: Using API Version  1
	I0316 18:11:36.275899  838136 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 18:11:36.275919  838136 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 18:11:36.275941  838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0316 18:11:36.275967  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
	I0316 18:11:36.276574  838136 main.go:141] libmachine: () Calling .GetMachineName
	I0316 18:11:36.276891  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetState
	I0316 18:11:36.277922  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:11:36.278494  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
	I0316 18:11:36.278530  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:11:36.278688  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHPort
	I0316 18:11:36.278869  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
	I0316 18:11:36.279042  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHUsername
	I0316 18:11:36.279221  838136 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/old-k8s-version-985498/id_rsa Username:docker}
	I0316 18:11:36.280355  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .DriverName
	I0316 18:11:36.280673  838136 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0316 18:11:36.280698  838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0316 18:11:36.280718  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHHostname
	I0316 18:11:36.281348  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:11:36.281775  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
	I0316 18:11:36.281803  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:11:36.281972  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHPort
	I0316 18:11:36.282163  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
	I0316 18:11:36.282315  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHUsername
	I0316 18:11:36.282453  838136 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/old-k8s-version-985498/id_rsa Username:docker}
	I0316 18:11:36.286939  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHPort
	I0316 18:11:36.286962  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:11:36.286993  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:83", ip: ""} in network mk-old-k8s-version-985498: {Iface:virbr2 ExpiryTime:2024-03-16 19:10:33 +0000 UTC Type:0 Mac:52:54:00:0d:b3:83 Iaid: IPaddr:192.168.61.233 Prefix:24 Hostname:old-k8s-version-985498 Clientid:01:52:54:00:0d:b3:83}
	I0316 18:11:36.287015  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | domain old-k8s-version-985498 has defined IP address 192.168.61.233 and MAC address 52:54:00:0d:b3:83 in network mk-old-k8s-version-985498
	I0316 18:11:36.287249  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHKeyPath
	I0316 18:11:36.287468  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .GetSSHUsername
	I0316 18:11:36.287655  838136 sshutil.go:53] new ssh client: &{IP:192.168.61.233 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/old-k8s-version-985498/id_rsa Username:docker}
	I0316 18:11:36.395392  838136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 18:11:36.418891  838136 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-985498" to be "Ready" ...
	I0316 18:11:36.491432  838136 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0316 18:11:36.491479  838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0316 18:11:36.517572  838136 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0316 18:11:36.517605  838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0316 18:11:36.521428  838136 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0316 18:11:36.521456  838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0316 18:11:36.562163  838136 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 18:11:36.562207  838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0316 18:11:36.574387  838136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0316 18:11:36.579373  838136 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0316 18:11:36.579406  838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0316 18:11:36.589252  838136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 18:11:36.632946  838136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 18:11:36.636515  838136 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0316 18:11:36.636541  838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0316 18:11:36.734664  838136 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0316 18:11:36.734698  838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0316 18:11:36.851243  838136 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0316 18:11:36.851276  838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0316 18:11:37.123790  838136 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0316 18:11:37.123831  838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0316 18:11:37.211386  838136 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0316 18:11:37.211428  838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0316 18:11:37.257734  838136 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0316 18:11:37.257773  838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0316 18:11:37.326672  838136 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0316 18:11:37.326704  838136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0316 18:11:37.428817  838136 node_ready.go:49] node "old-k8s-version-985498" has status "Ready":"True"
	I0316 18:11:37.428860  838136 node_ready.go:38] duration metric: took 1.009919806s for node "old-k8s-version-985498" to be "Ready" ...
	I0316 18:11:37.428875  838136 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 18:11:37.449747  838136 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-p8874" in "kube-system" namespace to be "Ready" ...
	I0316 18:11:37.485833  838136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0316 18:11:37.593660  838136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.019186178s)
	I0316 18:11:37.593823  838136 main.go:141] libmachine: Making call to close driver server
	I0316 18:11:37.593879  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .Close
	I0316 18:11:37.594312  838136 main.go:141] libmachine: Successfully made call to close driver server
	I0316 18:11:37.594382  838136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 18:11:37.594398  838136 main.go:141] libmachine: Making call to close driver server
	I0316 18:11:37.594409  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .Close
	I0316 18:11:37.594317  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | Closing plugin on server side
	I0316 18:11:37.594720  838136 main.go:141] libmachine: Successfully made call to close driver server
	I0316 18:11:37.594743  838136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 18:11:37.603646  838136 main.go:141] libmachine: Making call to close driver server
	I0316 18:11:37.603751  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .Close
	I0316 18:11:37.604250  838136 main.go:141] libmachine: Successfully made call to close driver server
	I0316 18:11:37.604271  838136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 18:11:37.604286  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | Closing plugin on server side
	I0316 18:11:37.814018  838136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.224723801s)
	I0316 18:11:37.814092  838136 main.go:141] libmachine: Making call to close driver server
	I0316 18:11:37.814108  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .Close
	I0316 18:11:37.814160  838136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.181164264s)
	I0316 18:11:37.814210  838136 main.go:141] libmachine: Making call to close driver server
	I0316 18:11:37.814226  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .Close
	I0316 18:11:37.814831  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | Closing plugin on server side
	I0316 18:11:37.814840  838136 main.go:141] libmachine: Successfully made call to close driver server
	I0316 18:11:37.814855  838136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 18:11:37.814865  838136 main.go:141] libmachine: Making call to close driver server
	I0316 18:11:37.814875  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .Close
	I0316 18:11:37.814907  838136 main.go:141] libmachine: Successfully made call to close driver server
	I0316 18:11:37.814929  838136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 18:11:37.814938  838136 main.go:141] libmachine: Making call to close driver server
	I0316 18:11:37.814951  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .Close
	I0316 18:11:37.815318  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | Closing plugin on server side
	I0316 18:11:37.815352  838136 main.go:141] libmachine: Successfully made call to close driver server
	I0316 18:11:37.815359  838136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 18:11:37.815369  838136 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-985498"
	I0316 18:11:37.815465  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | Closing plugin on server side
	I0316 18:11:37.815504  838136 main.go:141] libmachine: Successfully made call to close driver server
	I0316 18:11:37.815521  838136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 18:11:38.290554  838136 main.go:141] libmachine: Making call to close driver server
	I0316 18:11:38.290594  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .Close
	I0316 18:11:38.290992  838136 main.go:141] libmachine: Successfully made call to close driver server
	I0316 18:11:38.291014  838136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 18:11:38.291021  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | Closing plugin on server side
	I0316 18:11:38.291029  838136 main.go:141] libmachine: Making call to close driver server
	I0316 18:11:38.291042  838136 main.go:141] libmachine: (old-k8s-version-985498) Calling .Close
	I0316 18:11:38.291316  838136 main.go:141] libmachine: (old-k8s-version-985498) DBG | Closing plugin on server side
	I0316 18:11:38.291358  838136 main.go:141] libmachine: Successfully made call to close driver server
	I0316 18:11:38.291366  838136 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 18:11:38.293371  838136 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-985498 addons enable metrics-server
	
	I0316 18:11:38.295184  838136 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0316 18:11:38.296764  838136 addons.go:505] duration metric: took 2.117899672s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I0316 18:11:39.457709  838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
	I0316 18:11:41.458339  838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
	I0316 18:11:43.957814  838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
	I0316 18:11:46.457217  838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
	I0316 18:11:48.958059  838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
	I0316 18:11:51.458586  838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
	I0316 18:11:53.460935  838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
	I0316 18:11:55.957540  838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
	I0316 18:11:57.958656  838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:00.457479  838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:02.458161  838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:04.458359  838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:06.458933  838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:08.958770  838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:11.457997  838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:13.460097  838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:15.959052  838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:18.456224  838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:20.458246  838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:22.959403  838136 pod_ready.go:102] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:25.457847  838136 pod_ready.go:92] pod "coredns-74ff55c5b-p8874" in "kube-system" namespace has status "Ready":"True"
	I0316 18:12:25.457875  838136 pod_ready.go:81] duration metric: took 48.008087164s for pod "coredns-74ff55c5b-p8874" in "kube-system" namespace to be "Ready" ...
	I0316 18:12:25.457890  838136 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
	I0316 18:12:27.466917  838136 pod_ready.go:102] pod "etcd-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:29.467072  838136 pod_ready.go:102] pod "etcd-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:31.968920  838136 pod_ready.go:102] pod "etcd-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:34.466799  838136 pod_ready.go:102] pod "etcd-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:36.969785  838136 pod_ready.go:102] pod "etcd-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:39.467148  838136 pod_ready.go:102] pod "etcd-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:41.747146  838136 pod_ready.go:102] pod "etcd-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:42.473125  838136 pod_ready.go:92] pod "etcd-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"True"
	I0316 18:12:42.473171  838136 pod_ready.go:81] duration metric: took 17.015273448s for pod "etcd-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
	I0316 18:12:42.473192  838136 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
	I0316 18:12:42.487303  838136 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"True"
	I0316 18:12:42.487332  838136 pod_ready.go:81] duration metric: took 14.130108ms for pod "kube-apiserver-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
	I0316 18:12:42.487343  838136 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
	I0316 18:12:44.495468  838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:46.496365  838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:48.996268  838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:50.996939  838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:53.495283  838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:55.498249  838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:57.498739  838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:12:59.997059  838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:13:02.497715  838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:13:04.995808  838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:13:06.997556  838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:13:09.502223  838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:13:11.995230  838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:13:13.996772  838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:13:16.495974  838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:13:18.497399  838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:13:20.998583  838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:13:23.495088  838136 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"False"
	I0316 18:13:25.495023  838136 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"True"
	I0316 18:13:25.495095  838136 pod_ready.go:81] duration metric: took 43.007714174s for pod "kube-controller-manager-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
	I0316 18:13:25.495119  838136 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nvd4k" in "kube-system" namespace to be "Ready" ...
	I0316 18:13:25.503545  838136 pod_ready.go:92] pod "kube-proxy-nvd4k" in "kube-system" namespace has status "Ready":"True"
	I0316 18:13:25.503575  838136 pod_ready.go:81] duration metric: took 8.446901ms for pod "kube-proxy-nvd4k" in "kube-system" namespace to be "Ready" ...
	I0316 18:13:25.503590  838136 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
	I0316 18:13:25.511577  838136 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-985498" in "kube-system" namespace has status "Ready":"True"
	I0316 18:13:25.511608  838136 pod_ready.go:81] duration metric: took 8.009557ms for pod "kube-scheduler-old-k8s-version-985498" in "kube-system" namespace to be "Ready" ...
	I0316 18:13:25.511620  838136 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace to be "Ready" ...
	I0316 18:13:27.520914  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:13:30.020574  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:13:32.520269  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:13:35.019671  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:13:37.019971  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:13:39.520618  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:13:42.019996  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:13:44.020764  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:13:46.519790  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:13:49.019724  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:13:51.020495  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:13:53.521024  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:13:56.019898  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:13:58.522343  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:01.018812  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:03.025763  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:05.519405  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:08.020013  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:10.519614  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:13.018496  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:15.021385  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:17.520865  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:20.023696  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:22.518491  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:24.518823  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:26.523460  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:28.527078  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:31.031993  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:33.522275  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:36.022529  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:38.521717  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:41.023808  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:43.520066  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:45.520182  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:47.521846  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:50.020453  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:52.021556  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:54.519667  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:56.520884  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:14:58.522239  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:01.020266  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:03.022120  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:05.520447  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:08.020488  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:10.518545  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:12.521483  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:15.019988  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:17.022626  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:19.522676  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:22.021070  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:24.021554  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:26.520510  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:29.020572  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:31.526496  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:34.022016  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:36.519921  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:38.520831  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:40.521307  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:43.019174  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:45.021664  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:47.519600  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:49.520987  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:51.522060  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:54.020471  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:56.020790  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:15:58.021958  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:00.023149  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:02.523023  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:05.021660  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:07.519158  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:09.520044  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:12.020492  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:14.521457  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:17.022695  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:19.621306  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:22.023069  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:24.519709  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:26.520133  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:28.521538  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:31.020524  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:33.520308  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:36.022479  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:38.521701  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:40.523678  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:43.022492  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:45.523895  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:47.524586  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:50.020159  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:52.518683  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:54.520757  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:56.521392  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:58.521540  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:01.019106  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:03.020683  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:05.520962  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:07.521498  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:10.019748  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:12.020707  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:14.519518  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:16.519651  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:19.019366  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:21.019491  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:23.021112  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:25.519500  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:25.519540  838136 pod_ready.go:81] duration metric: took 4m0.007912771s for pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace to be "Ready" ...
	E0316 18:17:25.519551  838136 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0316 18:17:25.519559  838136 pod_ready.go:38] duration metric: took 5m48.09067273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
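
What the pod_ready.go:102 lines above record is a plain condition poll: fetch the pod, check its Ready condition, sleep, and repeat until the WaitExtra budget expires. A minimal client-go sketch of that loop (not minikube's actual implementation; the polling interval and kubeconfig path are assumptions, while the namespace and pod name are taken from the log):

    // Poll a pod's Ready condition until a deadline, as pod_ready.go does above.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's PodReady condition is True.
    func isReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        deadline := time.Now().Add(4 * time.Minute) // the 4m0s WaitExtra budget seen above
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
                "metrics-server-9975d5f86-xqhk9", metav1.GetOptions{})
            if err == nil && isReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second) // the log shows probes roughly every 2-3s
        }
        fmt.Println("context deadline exceeded: pod never became Ready")
    }

The pod never turns Ready here because its metrics-server image pull keeps failing; the kubelet scan further down shows the reason.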
	I0316 18:17:25.519577  838136 api_server.go:52] waiting for apiserver process to appear ...
	I0316 18:17:25.519614  838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0316 18:17:25.519725  838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 18:17:25.587023  838136 cri.go:89] found id: "84cebb4cfc43d687983d6d41133a762dda43b9399298c00c44f46847e2f61438"
	I0316 18:17:25.587057  838136 cri.go:89] found id: ""
	I0316 18:17:25.587068  838136 logs.go:276] 1 containers: [84cebb4cfc43d687983d6d41133a762dda43b9399298c00c44f46847e2f61438]
	I0316 18:17:25.587136  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:17:25.593870  838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0316 18:17:25.593959  838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 18:17:25.644646  838136 cri.go:89] found id: "2434210f6c63bec8d2ba7076471915eb02d3219675ee8ac3cab9722cca4f03e9"
	I0316 18:17:25.644677  838136 cri.go:89] found id: ""
	I0316 18:17:25.644687  838136 logs.go:276] 1 containers: [2434210f6c63bec8d2ba7076471915eb02d3219675ee8ac3cab9722cca4f03e9]
	I0316 18:17:25.644751  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:17:25.652161  838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0316 18:17:25.652231  838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 18:17:25.712920  838136 cri.go:89] found id: "61efb30968d2bf3bd0aff15b70ec1a33c3654d61c5164cc2879e18ef21cd1b77"
	I0316 18:17:25.712955  838136 cri.go:89] found id: ""
	I0316 18:17:25.712967  838136 logs.go:276] 1 containers: [61efb30968d2bf3bd0aff15b70ec1a33c3654d61c5164cc2879e18ef21cd1b77]
	I0316 18:17:25.713041  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:17:25.719028  838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0316 18:17:25.719136  838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 18:17:25.773897  838136 cri.go:89] found id: "34b075a6e3dfea5f9806aeb9625651a26b0db86e59f277f6376fd8767fb23b0c"
	I0316 18:17:25.773927  838136 cri.go:89] found id: ""
	I0316 18:17:25.773937  838136 logs.go:276] 1 containers: [34b075a6e3dfea5f9806aeb9625651a26b0db86e59f277f6376fd8767fb23b0c]
	I0316 18:17:25.774002  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:17:25.780138  838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0316 18:17:25.780246  838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 18:17:25.843279  838136 cri.go:89] found id: "d73b58bba35328eea373a801852be747130c9844121cf55bd77643b3531047cd"
	I0316 18:17:25.843309  838136 cri.go:89] found id: ""
	I0316 18:17:25.843317  838136 logs.go:276] 1 containers: [d73b58bba35328eea373a801852be747130c9844121cf55bd77643b3531047cd]
	I0316 18:17:25.843375  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:17:25.848956  838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 18:17:25.849060  838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 18:17:25.899592  838136 cri.go:89] found id: "05061990c3ccf6f330cf21ba541a8be55fca74639e81e4b0d14b30bee51fc554"
	I0316 18:17:25.899624  838136 cri.go:89] found id: "162132fbe06feefe5047b9977675ebb65d90ca0056d9f9a9c6733dda273afd72"
	I0316 18:17:25.899630  838136 cri.go:89] found id: ""
	I0316 18:17:25.899641  838136 logs.go:276] 2 containers: [05061990c3ccf6f330cf21ba541a8be55fca74639e81e4b0d14b30bee51fc554 162132fbe06feefe5047b9977675ebb65d90ca0056d9f9a9c6733dda273afd72]
	I0316 18:17:25.899710  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:17:25.907916  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:17:25.918955  838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0316 18:17:25.919046  838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 18:17:25.971433  838136 cri.go:89] found id: ""
	I0316 18:17:25.971478  838136 logs.go:276] 0 containers: []
	W0316 18:17:25.971490  838136 logs.go:278] No container was found matching "kindnet"
	I0316 18:17:25.971498  838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 18:17:25.971572  838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 18:17:26.021187  838136 cri.go:89] found id: "aba262227c6f69883d13fafc927cfe64d82292e8029ae85f3213b3f2148c23e3"
	I0316 18:17:26.021220  838136 cri.go:89] found id: ""
	I0316 18:17:26.021229  838136 logs.go:276] 1 containers: [aba262227c6f69883d13fafc927cfe64d82292e8029ae85f3213b3f2148c23e3]
	I0316 18:17:26.021296  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:17:26.028046  838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0316 18:17:26.028122  838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 18:17:26.086850  838136 cri.go:89] found id: "aa120a5aa0d886b8cd2c321b4b358ee6299f67e9b4a59d4782345a8088bff5c8"
	I0316 18:17:26.086875  838136 cri.go:89] found id: "7ed441150c7335e02b0c6b3ac696c632796c0d1229fc30b38f78d02560c87aa6"
	I0316 18:17:26.086879  838136 cri.go:89] found id: ""
	I0316 18:17:26.086887  838136 logs.go:276] 2 containers: [aa120a5aa0d886b8cd2c321b4b358ee6299f67e9b4a59d4782345a8088bff5c8 7ed441150c7335e02b0c6b3ac696c632796c0d1229fc30b38f78d02560c87aa6]
	I0316 18:17:26.086940  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:17:26.093302  838136 ssh_runner.go:195] Run: which crictl
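
The cri.go / logs.go exchange above is container discovery: for each control-plane component, list matching containers with crictl and record their IDs for the log collection that follows. A sketch of the same probe run directly on the node (assumes crictl is on PATH; minikube actually drives these commands over SSH via ssh_runner.go):

    // List container IDs per component, as the cri.go:54 / logs.go:276 lines do above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs crictl reports for containers matching name.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // --quiet prints one 64-hex ID per line
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy",
            "kube-controller-manager", "kindnet", "kubernetes-dashboard", "storage-provisioner",
        }
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("%s: %v\n", c, err)
                continue
            }
            fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
        }
    }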
	I0316 18:17:26.101414  838136 logs.go:123] Gathering logs for etcd [2434210f6c63bec8d2ba7076471915eb02d3219675ee8ac3cab9722cca4f03e9] ...
	I0316 18:17:26.101443  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2434210f6c63bec8d2ba7076471915eb02d3219675ee8ac3cab9722cca4f03e9"
	I0316 18:17:26.171632  838136 logs.go:123] Gathering logs for coredns [61efb30968d2bf3bd0aff15b70ec1a33c3654d61c5164cc2879e18ef21cd1b77] ...
	I0316 18:17:26.171697  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61efb30968d2bf3bd0aff15b70ec1a33c3654d61c5164cc2879e18ef21cd1b77"
	I0316 18:17:26.219764  838136 logs.go:123] Gathering logs for storage-provisioner [7ed441150c7335e02b0c6b3ac696c632796c0d1229fc30b38f78d02560c87aa6] ...
	I0316 18:17:26.219813  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed441150c7335e02b0c6b3ac696c632796c0d1229fc30b38f78d02560c87aa6"
	I0316 18:17:26.281101  838136 logs.go:123] Gathering logs for describe nodes ...
	I0316 18:17:26.281153  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 18:17:26.484976  838136 logs.go:123] Gathering logs for kube-controller-manager [162132fbe06feefe5047b9977675ebb65d90ca0056d9f9a9c6733dda273afd72] ...
	I0316 18:17:26.485019  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162132fbe06feefe5047b9977675ebb65d90ca0056d9f9a9c6733dda273afd72"
	I0316 18:17:26.556929  838136 logs.go:123] Gathering logs for container status ...
	I0316 18:17:26.556977  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 18:17:26.609552  838136 logs.go:123] Gathering logs for storage-provisioner [aa120a5aa0d886b8cd2c321b4b358ee6299f67e9b4a59d4782345a8088bff5c8] ...
	I0316 18:17:26.609594  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa120a5aa0d886b8cd2c321b4b358ee6299f67e9b4a59d4782345a8088bff5c8"
	I0316 18:17:26.656257  838136 logs.go:123] Gathering logs for kubelet ...
	I0316 18:17:26.656294  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0316 18:17:26.698787  838136 logs.go:138] Found kubelet problem: Mar 16 18:11:24 old-k8s-version-985498 kubelet[888]: E0316 18:11:24.452217     888 pod_workers.go:191] Error syncing pod f8d3d61ad8d45c80ab92bcedbe7fdb7d ("kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-210505493 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/35: file exists"
	W0316 18:17:26.703383  838136 logs.go:138] Found kubelet problem: Mar 16 18:11:27 old-k8s-version-985498 kubelet[888]: E0316 18:11:27.530957     888 pod_workers.go:191] Error syncing pod 31a485c797dc9b239357ad3b694dc86e ("kube-apiserver-old-k8s-version-985498_kube-system(31a485c797dc9b239357ad3b694dc86e)"), skipping: failed to "StartContainer" for "kube-apiserver" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-3710715184 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/36: file exists"
	W0316 18:17:26.705326  838136 logs.go:138] Found kubelet problem: Mar 16 18:11:29 old-k8s-version-985498 kubelet[888]: E0316 18:11:29.589592     888 pod_workers.go:191] Error syncing pod f8d3d61ad8d45c80ab92bcedbe7fdb7d ("kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"
	W0316 18:17:26.708845  838136 logs.go:138] Found kubelet problem: Mar 16 18:11:33 old-k8s-version-985498 kubelet[888]: E0316 18:11:33.774758     888 pod_workers.go:191] Error syncing pod f8d3d61ad8d45c80ab92bcedbe7fdb7d ("kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"
	W0316 18:17:26.713784  838136 logs.go:138] Found kubelet problem: Mar 16 18:11:34 old-k8s-version-985498 kubelet[888]: E0316 18:11:34.296039     888 pod_workers.go:191] Error syncing pod d89b271f-838a-4592-b128-fcb2a06fc5e9 ("storage-provisioner_kube-system(d89b271f-838a-4592-b128-fcb2a06fc5e9)"), skipping: failed to "StartContainer" for "storage-provisioner" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1431217611 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/38: file exists"
	W0316 18:17:26.719803  838136 logs.go:138] Found kubelet problem: Mar 16 18:11:37 old-k8s-version-985498 kubelet[888]: E0316 18:11:37.840851     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0316 18:17:26.719947  838136 logs.go:138] Found kubelet problem: Mar 16 18:11:38 old-k8s-version-985498 kubelet[888]: E0316 18:11:38.487672     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.721883  838136 logs.go:138] Found kubelet problem: Mar 16 18:11:48 old-k8s-version-985498 kubelet[888]: E0316 18:11:48.375825     888 pod_workers.go:191] Error syncing pod f8d3d61ad8d45c80ab92bcedbe7fdb7d ("kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1993581407 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/41: file exists"
	W0316 18:17:26.723186  838136 logs.go:138] Found kubelet problem: Mar 16 18:11:48 old-k8s-version-985498 kubelet[888]: E0316 18:11:48.539670     888 pod_workers.go:191] Error syncing pod daf8607f-2ff3-4d80-b1ed-ca2d24cb6b36 ("kube-proxy-nvd4k_kube-system(daf8607f-2ff3-4d80-b1ed-ca2d24cb6b36)"), skipping: failed to "StartContainer" for "kube-proxy" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2911645386 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/42: file exists"
	W0316 18:17:26.725902  838136 logs.go:138] Found kubelet problem: Mar 16 18:11:50 old-k8s-version-985498 kubelet[888]: E0316 18:11:50.493127     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0316 18:17:26.727816  838136 logs.go:138] Found kubelet problem: Mar 16 18:12:01 old-k8s-version-985498 kubelet[888]: E0316 18:12:01.388860     888 pod_workers.go:191] Error syncing pod f8d3d61ad8d45c80ab92bcedbe7fdb7d ("kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2375308116 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/44: file exists"
	W0316 18:17:26.727957  838136 logs.go:138] Found kubelet problem: Mar 16 18:12:02 old-k8s-version-985498 kubelet[888]: E0316 18:12:02.347425     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.729296  838136 logs.go:138] Found kubelet problem: Mar 16 18:12:04 old-k8s-version-985498 kubelet[888]: E0316 18:12:04.759315     888 pod_workers.go:191] Error syncing pod 9d1a1153-d964-4893-aae0-6b926755edf4 ("busybox_default(9d1a1153-d964-4893-aae0-6b926755edf4)"), skipping: failed to "StartContainer" for "busybox" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\": failed to prepare extraction snapshot \"extract-753167480-EI9m sha256:e49dd1e534d9df22f1c5041581eaeb3f23fc6ef51ac5a4963ab35adc8f056f65\": failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2174206111 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/45: file exists"
	W0316 18:17:26.729513  838136 logs.go:138] Found kubelet problem: Mar 16 18:12:05 old-k8s-version-985498 kubelet[888]: E0316 18:12:05.583630     888 pod_workers.go:191] Error syncing pod 9d1a1153-d964-4893-aae0-6b926755edf4 ("busybox_default(9d1a1153-d964-4893-aae0-6b926755edf4)"), skipping: failed to "StartContainer" for "busybox" with ImagePullBackOff: "Back-off pulling image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	W0316 18:17:26.731335  838136 logs.go:138] Found kubelet problem: Mar 16 18:12:17 old-k8s-version-985498 kubelet[888]: E0316 18:12:17.365731     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0316 18:17:26.732305  838136 logs.go:138] Found kubelet problem: Mar 16 18:12:31 old-k8s-version-985498 kubelet[888]: E0316 18:12:31.362316     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.732729  838136 logs.go:138] Found kubelet problem: Mar 16 18:12:38 old-k8s-version-985498 kubelet[888]: E0316 18:12:38.782628     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.732969  838136 logs.go:138] Found kubelet problem: Mar 16 18:12:39 old-k8s-version-985498 kubelet[888]: E0316 18:12:39.791862     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.733111  838136 logs.go:138] Found kubelet problem: Mar 16 18:12:43 old-k8s-version-985498 kubelet[888]: E0316 18:12:43.348091     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.733346  838136 logs.go:138] Found kubelet problem: Mar 16 18:12:46 old-k8s-version-985498 kubelet[888]: E0316 18:12:46.689033     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.735058  838136 logs.go:138] Found kubelet problem: Mar 16 18:12:58 old-k8s-version-985498 kubelet[888]: E0316 18:12:58.404260     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0316 18:17:26.735490  838136 logs.go:138] Found kubelet problem: Mar 16 18:13:02 old-k8s-version-985498 kubelet[888]: E0316 18:13:02.883259     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.735729  838136 logs.go:138] Found kubelet problem: Mar 16 18:13:06 old-k8s-version-985498 kubelet[888]: E0316 18:13:06.689066     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.735866  838136 logs.go:138] Found kubelet problem: Mar 16 18:13:11 old-k8s-version-985498 kubelet[888]: E0316 18:13:11.347423     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.736102  838136 logs.go:138] Found kubelet problem: Mar 16 18:13:20 old-k8s-version-985498 kubelet[888]: E0316 18:13:20.346818     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.736237  838136 logs.go:138] Found kubelet problem: Mar 16 18:13:22 old-k8s-version-985498 kubelet[888]: E0316 18:13:22.349160     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.736374  838136 logs.go:138] Found kubelet problem: Mar 16 18:13:34 old-k8s-version-985498 kubelet[888]: E0316 18:13:34.347075     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.736801  838136 logs.go:138] Found kubelet problem: Mar 16 18:13:36 old-k8s-version-985498 kubelet[888]: E0316 18:13:36.006325     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.737037  838136 logs.go:138] Found kubelet problem: Mar 16 18:13:37 old-k8s-version-985498 kubelet[888]: E0316 18:13:37.013902     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.737173  838136 logs.go:138] Found kubelet problem: Mar 16 18:13:46 old-k8s-version-985498 kubelet[888]: E0316 18:13:46.347475     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.737421  838136 logs.go:138] Found kubelet problem: Mar 16 18:13:51 old-k8s-version-985498 kubelet[888]: E0316 18:13:51.347194     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.737556  838136 logs.go:138] Found kubelet problem: Mar 16 18:13:58 old-k8s-version-985498 kubelet[888]: E0316 18:13:58.348592     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.737794  838136 logs.go:138] Found kubelet problem: Mar 16 18:14:03 old-k8s-version-985498 kubelet[888]: E0316 18:14:03.346460     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.737933  838136 logs.go:138] Found kubelet problem: Mar 16 18:14:09 old-k8s-version-985498 kubelet[888]: E0316 18:14:09.347794     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.738169  838136 logs.go:138] Found kubelet problem: Mar 16 18:14:15 old-k8s-version-985498 kubelet[888]: E0316 18:14:15.348212     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.739915  838136 logs.go:138] Found kubelet problem: Mar 16 18:14:21 old-k8s-version-985498 kubelet[888]: E0316 18:14:21.360852     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0316 18:17:26.740357  838136 logs.go:138] Found kubelet problem: Mar 16 18:14:29 old-k8s-version-985498 kubelet[888]: E0316 18:14:29.175538     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.740493  838136 logs.go:138] Found kubelet problem: Mar 16 18:14:32 old-k8s-version-985498 kubelet[888]: E0316 18:14:32.348500     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.740728  838136 logs.go:138] Found kubelet problem: Mar 16 18:14:36 old-k8s-version-985498 kubelet[888]: E0316 18:14:36.689558     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.740867  838136 logs.go:138] Found kubelet problem: Mar 16 18:14:46 old-k8s-version-985498 kubelet[888]: E0316 18:14:46.348058     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.741102  838136 logs.go:138] Found kubelet problem: Mar 16 18:14:49 old-k8s-version-985498 kubelet[888]: E0316 18:14:49.347315     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.741235  838136 logs.go:138] Found kubelet problem: Mar 16 18:14:57 old-k8s-version-985498 kubelet[888]: E0316 18:14:57.349480     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.741471  838136 logs.go:138] Found kubelet problem: Mar 16 18:15:03 old-k8s-version-985498 kubelet[888]: E0316 18:15:03.346815     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.741606  838136 logs.go:138] Found kubelet problem: Mar 16 18:15:10 old-k8s-version-985498 kubelet[888]: E0316 18:15:10.347187     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.741845  838136 logs.go:138] Found kubelet problem: Mar 16 18:15:18 old-k8s-version-985498 kubelet[888]: E0316 18:15:18.346934     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.741980  838136 logs.go:138] Found kubelet problem: Mar 16 18:15:25 old-k8s-version-985498 kubelet[888]: E0316 18:15:25.347491     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.742249  838136 logs.go:138] Found kubelet problem: Mar 16 18:15:29 old-k8s-version-985498 kubelet[888]: E0316 18:15:29.347101     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.742385  838136 logs.go:138] Found kubelet problem: Mar 16 18:15:39 old-k8s-version-985498 kubelet[888]: E0316 18:15:39.347176     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.742620  838136 logs.go:138] Found kubelet problem: Mar 16 18:15:42 old-k8s-version-985498 kubelet[888]: E0316 18:15:42.347133     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.742754  838136 logs.go:138] Found kubelet problem: Mar 16 18:15:50 old-k8s-version-985498 kubelet[888]: E0316 18:15:50.348255     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.743180  838136 logs.go:138] Found kubelet problem: Mar 16 18:15:58 old-k8s-version-985498 kubelet[888]: E0316 18:15:58.519929     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.743316  838136 logs.go:138] Found kubelet problem: Mar 16 18:16:03 old-k8s-version-985498 kubelet[888]: E0316 18:16:03.347044     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.743562  838136 logs.go:138] Found kubelet problem: Mar 16 18:16:06 old-k8s-version-985498 kubelet[888]: E0316 18:16:06.689281     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.743697  838136 logs.go:138] Found kubelet problem: Mar 16 18:16:17 old-k8s-version-985498 kubelet[888]: E0316 18:16:17.347194     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.743937  838136 logs.go:138] Found kubelet problem: Mar 16 18:16:19 old-k8s-version-985498 kubelet[888]: E0316 18:16:19.346699     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.744072  838136 logs.go:138] Found kubelet problem: Mar 16 18:16:30 old-k8s-version-985498 kubelet[888]: E0316 18:16:30.348163     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.744308  838136 logs.go:138] Found kubelet problem: Mar 16 18:16:34 old-k8s-version-985498 kubelet[888]: E0316 18:16:34.346242     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.744441  838136 logs.go:138] Found kubelet problem: Mar 16 18:16:41 old-k8s-version-985498 kubelet[888]: E0316 18:16:41.347306     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.744677  838136 logs.go:138] Found kubelet problem: Mar 16 18:16:49 old-k8s-version-985498 kubelet[888]: E0316 18:16:49.347088     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.744816  838136 logs.go:138] Found kubelet problem: Mar 16 18:16:56 old-k8s-version-985498 kubelet[888]: E0316 18:16:56.347531     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.745050  838136 logs.go:138] Found kubelet problem: Mar 16 18:17:01 old-k8s-version-985498 kubelet[888]: E0316 18:17:01.346320     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.746768  838136 logs.go:138] Found kubelet problem: Mar 16 18:17:08 old-k8s-version-985498 kubelet[888]: E0316 18:17:08.362954     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0316 18:17:26.747010  838136 logs.go:138] Found kubelet problem: Mar 16 18:17:16 old-k8s-version-985498 kubelet[888]: E0316 18:17:16.346879     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.747145  838136 logs.go:138] Found kubelet problem: Mar 16 18:17:23 old-k8s-version-985498 kubelet[888]: E0316 18:17:23.347609     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
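
Two signatures dominate the kubelet problems found above. The metrics-server ErrImagePull/ImagePullBackOff against fake.domain/registry.k8s.io/echoserver:1.4 is apparently the test fixture at work (stdout shows "Using image fake.domain/registry.k8s.io/echoserver:1.4", an unresolvable registry). The containerd overlayfs snapshotter "failed to rename ... file exists" CreateContainerError is the real anomaly: between 18:11 and 18:12 it repeatedly blocked kube-controller-manager, kube-apiserver, kube-proxy, storage-provisioner, and busybox from starting. A minimal sketch of the scan itself (the real matcher lives in logs.go:138; the regexp here is an assumption):

    // Scan the kubelet journal for pod sync errors, as logs.go:138 does above.
    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
        "regexp"
    )

    func main() {
        cmd := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400")
        out, err := cmd.StdoutPipe()
        if err != nil {
            panic(err)
        }
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        // Hypothetical matcher for the "Error syncing pod" lines flagged above.
        problem := regexp.MustCompile(`pod_workers\.go:\d+\] Error syncing pod`)
        sc := bufio.NewScanner(out)
        sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            if problem.MatchString(sc.Text()) {
                fmt.Println("Found kubelet problem:", sc.Text())
            }
        }
        cmd.Wait()
    }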
	I0316 18:17:26.747156  838136 logs.go:123] Gathering logs for dmesg ...
	I0316 18:17:26.747172  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 18:17:26.766207  838136 logs.go:123] Gathering logs for kube-scheduler [34b075a6e3dfea5f9806aeb9625651a26b0db86e59f277f6376fd8767fb23b0c] ...
	I0316 18:17:26.766251  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34b075a6e3dfea5f9806aeb9625651a26b0db86e59f277f6376fd8767fb23b0c"
	I0316 18:17:26.823871  838136 logs.go:123] Gathering logs for kube-proxy [d73b58bba35328eea373a801852be747130c9844121cf55bd77643b3531047cd] ...
	I0316 18:17:26.823920  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d73b58bba35328eea373a801852be747130c9844121cf55bd77643b3531047cd"
	I0316 18:17:26.870843  838136 logs.go:123] Gathering logs for kube-controller-manager [05061990c3ccf6f330cf21ba541a8be55fca74639e81e4b0d14b30bee51fc554] ...
	I0316 18:17:26.870883  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05061990c3ccf6f330cf21ba541a8be55fca74639e81e4b0d14b30bee51fc554"
	I0316 18:17:26.940409  838136 logs.go:123] Gathering logs for kubernetes-dashboard [aba262227c6f69883d13fafc927cfe64d82292e8029ae85f3213b3f2148c23e3] ...
	I0316 18:17:26.940460  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aba262227c6f69883d13fafc927cfe64d82292e8029ae85f3213b3f2148c23e3"
	I0316 18:17:26.987147  838136 logs.go:123] Gathering logs for kube-apiserver [84cebb4cfc43d687983d6d41133a762dda43b9399298c00c44f46847e2f61438] ...
	I0316 18:17:26.987189  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84cebb4cfc43d687983d6d41133a762dda43b9399298c00c44f46847e2f61438"
	I0316 18:17:27.062021  838136 logs.go:123] Gathering logs for containerd ...
	I0316 18:17:27.062071  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
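
Each "Gathering logs for ..." step above is the same collection call with a different target: crictl logs --tail 400 <id> for containers, journalctl -u <unit> -n 400 for kubelet and containerd, and kubectl describe nodes for the node view. A sketch of the container branch (the IDs are the ones discovered above):

    // Collect the last 400 log lines per container, as logs.go:123 does above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func gatherLogs(id string) (string, error) {
        // CombinedOutput: crictl forwards the container's stderr stream to stderr.
        out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    func main() {
        ids := map[string]string{
            "kube-apiserver": "84cebb4cfc43d687983d6d41133a762dda43b9399298c00c44f46847e2f61438",
            "etcd":           "2434210f6c63bec8d2ba7076471915eb02d3219675ee8ac3cab9722cca4f03e9",
        }
        for name, id := range ids {
            logs, err := gatherLogs(id)
            if err != nil {
                fmt.Printf("%s: %v\n", name, err)
                continue
            }
            fmt.Printf("=== %s ===\n%s\n", name, logs)
        }
    }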
	I0316 18:17:27.136063  838136 out.go:304] Setting ErrFile to fd 2...
	I0316 18:17:27.136101  838136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0316 18:17:27.136179  838136 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0316 18:17:27.136198  838136 out.go:239]   Mar 16 18:16:56 old-k8s-version-985498 kubelet[888]: E0316 18:16:56.347531     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Mar 16 18:16:56 old-k8s-version-985498 kubelet[888]: E0316 18:16:56.347531     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:27.136211  838136 out.go:239]   Mar 16 18:17:01 old-k8s-version-985498 kubelet[888]: E0316 18:17:01.346320     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	  Mar 16 18:17:01 old-k8s-version-985498 kubelet[888]: E0316 18:17:01.346320     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:27.136229  838136 out.go:239]   Mar 16 18:17:08 old-k8s-version-985498 kubelet[888]: E0316 18:17:08.362954     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0316 18:17:27.136246  838136 out.go:239]   Mar 16 18:17:16 old-k8s-version-985498 kubelet[888]: E0316 18:17:16.346879     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:27.136263  838136 out.go:239]   Mar 16 18:17:23 old-k8s-version-985498 kubelet[888]: E0316 18:17:23.347609     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0316 18:17:27.136276  838136 out.go:304] Setting ErrFile to fd 2...
	I0316 18:17:27.136283  838136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 18:17:37.137763  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:17:37.159011  838136 api_server.go:72] duration metric: took 6m0.980190849s to wait for apiserver process to appear ...
	I0316 18:17:37.159048  838136 api_server.go:88] waiting for apiserver healthz status ...
	I0316 18:17:37.161262  838136 out.go:177] 
	W0316 18:17:37.162843  838136 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0316 18:17:37.162874  838136 out.go:239] * 
	W0316 18:17:37.163764  838136 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0316 18:17:37.165696  838136 out.go:177] 

                                                
                                                
** /stderr **
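
For orientation before the post-mortem: the fatal condition above is the 6m0s apiserver healthz wait expiring (the GUEST_START exit), not the kubelet image-pull warnings by themselves. A minimal shell sketch of the two checks involved, for reproducing by hand (assumptions: the profile's VM is still running, and the apiserver listens on the default port 8443 recorded in the cluster configs later in this log):

    # Probe the healthz endpoint the start wait polls (self-signed cert, hence -k).
    IP=$(out/minikube-linux-amd64 -p old-k8s-version-985498 ip)
    curl -k "https://${IP}:8443/healthz"   # a healthy apiserver answers "ok"
    # Confirm the DNS failure behind the kubelet's ErrImagePull lines.
    nslookup fake.domain                   # "no such host", as in the kubelet log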
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-985498 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 80
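
Note that the fake.domain pulls in the kubelet problem lines come from the suite's own addon flags rather than a registry outage: enabling metrics-server with --registries=MetricsServer=fake.domain rewrites the image reference to fake.domain/registry.k8s.io/echoserver:1.4, which is exactly the pull that backs off above. The same override appears verbatim for the parallel newest-cni profile in the audit table below:

    out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-993416 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain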
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-985498 -n old-k8s-version-985498
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-985498 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-985498 logs -n 25: (1.563416581s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| image   | embed-certs-831781 image list                          | embed-certs-831781           | jenkins | v1.32.0 | 16 Mar 24 18:15 UTC | 16 Mar 24 18:15 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-831781                                  | embed-certs-831781           | jenkins | v1.32.0 | 16 Mar 24 18:15 UTC | 16 Mar 24 18:15 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-831781                                  | embed-certs-831781           | jenkins | v1.32.0 | 16 Mar 24 18:15 UTC | 16 Mar 24 18:15 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-831781                                  | embed-certs-831781           | jenkins | v1.32.0 | 16 Mar 24 18:15 UTC | 16 Mar 24 18:15 UTC |
	| delete  | -p embed-certs-831781                                  | embed-certs-831781           | jenkins | v1.32.0 | 16 Mar 24 18:15 UTC | 16 Mar 24 18:15 UTC |
	| start   | -p newest-cni-993416 --memory=2200 --alsologtostderr   | newest-cni-993416            | jenkins | v1.32.0 | 16 Mar 24 18:15 UTC | 16 Mar 24 18:16 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| image   | no-preload-738074 image list                           | no-preload-738074            | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-738074                                   | no-preload-738074            | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-738074                                   | no-preload-738074            | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-738074                                   | no-preload-738074            | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
	| delete  | -p no-preload-738074                                   | no-preload-738074            | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
	| image   | default-k8s-diff-port-683490                           | default-k8s-diff-port-683490 | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-683490 | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
	|         | default-k8s-diff-port-683490                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-683490 | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
	|         | default-k8s-diff-port-683490                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-683490 | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
	|         | default-k8s-diff-port-683490                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-683490 | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
	|         | default-k8s-diff-port-683490                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-993416             | newest-cni-993416            | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-993416                                   | newest-cni-993416            | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-993416                  | newest-cni-993416            | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:16 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-993416 --memory=2200 --alsologtostderr   | newest-cni-993416            | jenkins | v1.32.0 | 16 Mar 24 18:16 UTC | 16 Mar 24 18:17 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=containerd          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| image   | newest-cni-993416 image list                           | newest-cni-993416            | jenkins | v1.32.0 | 16 Mar 24 18:17 UTC | 16 Mar 24 18:17 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-993416                                   | newest-cni-993416            | jenkins | v1.32.0 | 16 Mar 24 18:17 UTC | 16 Mar 24 18:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-993416                                   | newest-cni-993416            | jenkins | v1.32.0 | 16 Mar 24 18:17 UTC | 16 Mar 24 18:17 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-993416                                   | newest-cni-993416            | jenkins | v1.32.0 | 16 Mar 24 18:17 UTC | 16 Mar 24 18:17 UTC |
	| delete  | -p newest-cni-993416                                   | newest-cni-993416            | jenkins | v1.32.0 | 16 Mar 24 18:17 UTC | 16 Mar 24 18:17 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/16 18:16:53
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
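	(A worked reading of that format, using the first entry below: "I0316 18:16:53.227422  841431 out.go:291]" decodes as severity I for Info, date 03/16, time 18:16:53.227422, thread id 841431, and source location out.go line 291.)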
	I0316 18:16:53.227422  841431 out.go:291] Setting OutFile to fd 1 ...
	I0316 18:16:53.228035  841431 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 18:16:53.228055  841431 out.go:304] Setting ErrFile to fd 2...
	I0316 18:16:53.228062  841431 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 18:16:53.228570  841431 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-781196/.minikube/bin
	I0316 18:16:53.229606  841431 out.go:298] Setting JSON to false
	I0316 18:16:53.230645  841431 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":86360,"bootTime":1710526653,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0316 18:16:53.230723  841431 start.go:139] virtualization: kvm guest
	I0316 18:16:53.233024  841431 out.go:177] * [newest-cni-993416] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0316 18:16:53.234895  841431 out.go:177]   - MINIKUBE_LOCATION=18277
	I0316 18:16:53.234951  841431 notify.go:220] Checking for updates...
	I0316 18:16:53.236410  841431 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 18:16:53.237994  841431 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18277-781196/kubeconfig
	I0316 18:16:53.239420  841431 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-781196/.minikube
	I0316 18:16:53.240653  841431 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0316 18:16:53.241899  841431 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 18:16:53.243743  841431 config.go:182] Loaded profile config "newest-cni-993416": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I0316 18:16:53.244162  841431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 18:16:53.244226  841431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 18:16:53.260630  841431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43931
	I0316 18:16:53.261234  841431 main.go:141] libmachine: () Calling .GetVersion
	I0316 18:16:53.261919  841431 main.go:141] libmachine: Using API Version  1
	I0316 18:16:53.261944  841431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 18:16:53.262404  841431 main.go:141] libmachine: () Calling .GetMachineName
	I0316 18:16:53.262690  841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
	I0316 18:16:53.263030  841431 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 18:16:53.263339  841431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 18:16:53.263378  841431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 18:16:53.279157  841431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38083
	I0316 18:16:53.279747  841431 main.go:141] libmachine: () Calling .GetVersion
	I0316 18:16:53.280271  841431 main.go:141] libmachine: Using API Version  1
	I0316 18:16:53.280294  841431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 18:16:53.280635  841431 main.go:141] libmachine: () Calling .GetMachineName
	I0316 18:16:53.280850  841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
	I0316 18:16:53.320020  841431 out.go:177] * Using the kvm2 driver based on existing profile
	I0316 18:16:53.321474  841431 start.go:297] selected driver: kvm2
	I0316 18:16:53.321503  841431 start.go:901] validating driver "kvm2" against &{Name:newest-cni-993416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-993416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.228 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 18:16:53.321648  841431 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 18:16:53.322409  841431 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 18:16:53.322488  841431 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18277-781196/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0316 18:16:53.339422  841431 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0316 18:16:53.339952  841431 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0316 18:16:53.340030  841431 cni.go:84] Creating CNI manager for ""
	I0316 18:16:53.340045  841431 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0316 18:16:53.340083  841431 start.go:340] cluster config:
	{Name:newest-cni-993416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-993416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.228 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 18:16:53.340193  841431 iso.go:125] acquiring lock: {Name:mk48d016d8d435147389d59734ec7ed09e828db8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 18:16:53.342110  841431 out.go:177] * Starting "newest-cni-993416" primary control-plane node in "newest-cni-993416" cluster
	I0316 18:16:53.343482  841431 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0316 18:16:53.343551  841431 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18277-781196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0316 18:16:53.343565  841431 cache.go:56] Caching tarball of preloaded images
	I0316 18:16:53.343690  841431 preload.go:173] Found /home/jenkins/minikube-integration/18277-781196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0316 18:16:53.343716  841431 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on containerd
	I0316 18:16:53.343850  841431 profile.go:142] Saving config to /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/newest-cni-993416/config.json ...
	I0316 18:16:53.344068  841431 start.go:360] acquireMachinesLock for newest-cni-993416: {Name:mkf97f06937f9fa972ee38e81e5f88859912f65f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0316 18:16:53.344163  841431 start.go:364] duration metric: took 72.742µs to acquireMachinesLock for "newest-cni-993416"
	I0316 18:16:53.344180  841431 start.go:96] Skipping create...Using existing machine configuration
	I0316 18:16:53.344186  841431 fix.go:54] fixHost starting: 
	I0316 18:16:53.344487  841431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 18:16:53.344525  841431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 18:16:53.360544  841431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37539
	I0316 18:16:53.361046  841431 main.go:141] libmachine: () Calling .GetVersion
	I0316 18:16:53.361568  841431 main.go:141] libmachine: Using API Version  1
	I0316 18:16:53.361590  841431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 18:16:53.361978  841431 main.go:141] libmachine: () Calling .GetMachineName
	I0316 18:16:53.362212  841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
	I0316 18:16:53.362394  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetState
	I0316 18:16:53.364378  841431 fix.go:112] recreateIfNeeded on newest-cni-993416: state=Stopped err=<nil>
	I0316 18:16:53.364411  841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
	W0316 18:16:53.364597  841431 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 18:16:53.367250  841431 out.go:177] * Restarting existing kvm2 VM for "newest-cni-993416" ...
	I0316 18:16:50.020159  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:52.518683  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:53.368632  841431 main.go:141] libmachine: (newest-cni-993416) Calling .Start
	I0316 18:16:53.368897  841431 main.go:141] libmachine: (newest-cni-993416) Ensuring networks are active...
	I0316 18:16:53.369842  841431 main.go:141] libmachine: (newest-cni-993416) Ensuring network default is active
	I0316 18:16:53.370156  841431 main.go:141] libmachine: (newest-cni-993416) Ensuring network mk-newest-cni-993416 is active
	I0316 18:16:53.370552  841431 main.go:141] libmachine: (newest-cni-993416) Getting domain xml...
	I0316 18:16:53.371486  841431 main.go:141] libmachine: (newest-cni-993416) Creating domain...
	I0316 18:16:54.638792  841431 main.go:141] libmachine: (newest-cni-993416) Waiting to get IP...
	I0316 18:16:54.639743  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:16:54.640202  841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
	I0316 18:16:54.640246  841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:16:54.640159  841466 retry.go:31] will retry after 208.50444ms: waiting for machine to come up
	I0316 18:16:54.850948  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:16:54.851402  841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
	I0316 18:16:54.851470  841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:16:54.851350  841466 retry.go:31] will retry after 359.013848ms: waiting for machine to come up
	I0316 18:16:55.212276  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:16:55.212780  841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
	I0316 18:16:55.212816  841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:16:55.212697  841466 retry.go:31] will retry after 307.020465ms: waiting for machine to come up
	I0316 18:16:55.521507  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:16:55.522128  841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
	I0316 18:16:55.522160  841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:16:55.522086  841466 retry.go:31] will retry after 542.340519ms: waiting for machine to come up
	I0316 18:16:56.065858  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:16:56.066417  841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
	I0316 18:16:56.066443  841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:16:56.066360  841466 retry.go:31] will retry after 542.386197ms: waiting for machine to come up
	I0316 18:16:56.610202  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:16:56.610597  841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
	I0316 18:16:56.610633  841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:16:56.610569  841466 retry.go:31] will retry after 665.676296ms: waiting for machine to come up
	I0316 18:16:57.278214  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:16:57.278730  841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
	I0316 18:16:57.278759  841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:16:57.278684  841466 retry.go:31] will retry after 913.154561ms: waiting for machine to come up
	I0316 18:16:58.193848  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:16:58.194327  841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
	I0316 18:16:58.194347  841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:16:58.194264  841466 retry.go:31] will retry after 918.549294ms: waiting for machine to come up
	I0316 18:16:54.520757  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:56.521392  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:58.521540  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:16:59.114563  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:16:59.115081  841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
	I0316 18:16:59.115110  841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:16:59.115060  841466 retry.go:31] will retry after 1.640225957s: waiting for machine to come up
	I0316 18:17:00.756565  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:00.757032  841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
	I0316 18:17:00.757064  841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:17:00.756967  841466 retry.go:31] will retry after 1.524971609s: waiting for machine to come up
	I0316 18:17:02.283964  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:02.284601  841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
	I0316 18:17:02.284637  841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:17:02.284534  841466 retry.go:31] will retry after 2.005667021s: waiting for machine to come up
	I0316 18:17:01.019106  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:03.020683  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:04.291575  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:04.292157  841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
	I0316 18:17:04.292184  841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:17:04.292082  841466 retry.go:31] will retry after 2.262780898s: waiting for machine to come up
	I0316 18:17:06.557963  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:06.558485  841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
	I0316 18:17:06.558531  841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:17:06.558429  841466 retry.go:31] will retry after 3.717938959s: waiting for machine to come up
	I0316 18:17:05.520962  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:07.521498  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:10.279363  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:10.279979  841431 main.go:141] libmachine: (newest-cni-993416) DBG | unable to find current IP address of domain newest-cni-993416 in network mk-newest-cni-993416
	I0316 18:17:10.280013  841431 main.go:141] libmachine: (newest-cni-993416) DBG | I0316 18:17:10.279896  841466 retry.go:31] will retry after 4.612576288s: waiting for machine to come up
	I0316 18:17:10.019748  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:12.020707  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:14.894517  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:14.895091  841431 main.go:141] libmachine: (newest-cni-993416) Found IP for machine: 192.168.72.228
	I0316 18:17:14.895117  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has current primary IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:14.895123  841431 main.go:141] libmachine: (newest-cni-993416) Reserving static IP address...
	I0316 18:17:14.895619  841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "newest-cni-993416", mac: "52:54:00:73:0d:0a", ip: "192.168.72.228"} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
	I0316 18:17:14.895668  841431 main.go:141] libmachine: (newest-cni-993416) DBG | skip adding static IP to network mk-newest-cni-993416 - found existing host DHCP lease matching {name: "newest-cni-993416", mac: "52:54:00:73:0d:0a", ip: "192.168.72.228"}
	I0316 18:17:14.895682  841431 main.go:141] libmachine: (newest-cni-993416) Reserved static IP address: 192.168.72.228
	I0316 18:17:14.895695  841431 main.go:141] libmachine: (newest-cni-993416) Waiting for SSH to be available...
	I0316 18:17:14.895711  841431 main.go:141] libmachine: (newest-cni-993416) DBG | Getting to WaitForSSH function...
	I0316 18:17:14.898142  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:14.898527  841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
	I0316 18:17:14.898562  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:14.898672  841431 main.go:141] libmachine: (newest-cni-993416) DBG | Using SSH client type: external
	I0316 18:17:14.898706  841431 main.go:141] libmachine: (newest-cni-993416) DBG | Using SSH private key: /home/jenkins/minikube-integration/18277-781196/.minikube/machines/newest-cni-993416/id_rsa (-rw-------)
	I0316 18:17:14.898730  841431 main.go:141] libmachine: (newest-cni-993416) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.228 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18277-781196/.minikube/machines/newest-cni-993416/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 18:17:14.898741  841431 main.go:141] libmachine: (newest-cni-993416) DBG | About to run SSH command:
	I0316 18:17:14.898758  841431 main.go:141] libmachine: (newest-cni-993416) DBG | exit 0
	I0316 18:17:15.036536  841431 main.go:141] libmachine: (newest-cni-993416) DBG | SSH cmd err, output: <nil>: 
	I0316 18:17:15.036959  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetConfigRaw
	I0316 18:17:15.037625  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetIP
	I0316 18:17:15.040416  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:15.040862  841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
	I0316 18:17:15.040901  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:15.041163  841431 profile.go:142] Saving config to /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/newest-cni-993416/config.json ...
	I0316 18:17:15.041566  841431 machine.go:94] provisionDockerMachine start ...
	I0316 18:17:15.041598  841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
	I0316 18:17:15.041905  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
	I0316 18:17:15.044592  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:15.044969  841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
	I0316 18:17:15.045012  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:15.045186  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHPort
	I0316 18:17:15.045443  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
	I0316 18:17:15.045620  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
	I0316 18:17:15.045755  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHUsername
	I0316 18:17:15.045935  841431 main.go:141] libmachine: Using SSH client type: native
	I0316 18:17:15.046253  841431 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.228 22 <nil> <nil>}
	I0316 18:17:15.046270  841431 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 18:17:15.165086  841431 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 18:17:15.165121  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetMachineName
	I0316 18:17:15.165450  841431 buildroot.go:166] provisioning hostname "newest-cni-993416"
	I0316 18:17:15.165479  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetMachineName
	I0316 18:17:15.165697  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
	I0316 18:17:15.168728  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:15.169061  841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
	I0316 18:17:15.169102  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:15.169253  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHPort
	I0316 18:17:15.169477  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
	I0316 18:17:15.169664  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
	I0316 18:17:15.169813  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHUsername
	I0316 18:17:15.169947  841431 main.go:141] libmachine: Using SSH client type: native
	I0316 18:17:15.170167  841431 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.228 22 <nil> <nil>}
	I0316 18:17:15.170187  841431 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-993416 && echo "newest-cni-993416" | sudo tee /etc/hostname
	I0316 18:17:15.308584  841431 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-993416
	
	I0316 18:17:15.308618  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
	I0316 18:17:15.311584  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:15.311985  841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
	I0316 18:17:15.312017  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:15.312250  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHPort
	I0316 18:17:15.312508  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
	I0316 18:17:15.312667  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
	I0316 18:17:15.312780  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHUsername
	I0316 18:17:15.312985  841431 main.go:141] libmachine: Using SSH client type: native
	I0316 18:17:15.313177  841431 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.228 22 <nil> <nil>}
	I0316 18:17:15.313203  841431 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-993416' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-993416/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-993416' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 18:17:15.445260  841431 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 18:17:15.445295  841431 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18277-781196/.minikube CaCertPath:/home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18277-781196/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18277-781196/.minikube}
	I0316 18:17:15.445351  841431 buildroot.go:174] setting up certificates
	I0316 18:17:15.445362  841431 provision.go:84] configureAuth start
	I0316 18:17:15.445376  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetMachineName
	I0316 18:17:15.445750  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetIP
	I0316 18:17:15.448920  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:15.449246  841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
	I0316 18:17:15.449275  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:15.449422  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
	I0316 18:17:15.451623  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:15.452046  841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
	I0316 18:17:15.452096  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:15.452243  841431 provision.go:143] copyHostCerts
	I0316 18:17:15.452326  841431 exec_runner.go:144] found /home/jenkins/minikube-integration/18277-781196/.minikube/key.pem, removing ...
	I0316 18:17:15.452338  841431 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18277-781196/.minikube/key.pem
	I0316 18:17:15.452405  841431 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18277-781196/.minikube/key.pem (1675 bytes)
	I0316 18:17:15.452522  841431 exec_runner.go:144] found /home/jenkins/minikube-integration/18277-781196/.minikube/ca.pem, removing ...
	I0316 18:17:15.452532  841431 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18277-781196/.minikube/ca.pem
	I0316 18:17:15.452563  841431 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18277-781196/.minikube/ca.pem (1082 bytes)
	I0316 18:17:15.452660  841431 exec_runner.go:144] found /home/jenkins/minikube-integration/18277-781196/.minikube/cert.pem, removing ...
	I0316 18:17:15.452676  841431 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18277-781196/.minikube/cert.pem
	I0316 18:17:15.452719  841431 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18277-781196/.minikube/cert.pem (1123 bytes)
	I0316 18:17:15.452818  841431 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18277-781196/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca-key.pem org=jenkins.newest-cni-993416 san=[127.0.0.1 192.168.72.228 localhost minikube newest-cni-993416]
	I0316 18:17:15.565115  841431 provision.go:177] copyRemoteCerts
	I0316 18:17:15.565188  841431 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 18:17:15.565228  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
	I0316 18:17:15.568227  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:15.568683  841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
	I0316 18:17:15.568713  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:15.569003  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHPort
	I0316 18:17:15.569248  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
	I0316 18:17:15.569484  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHUsername
	I0316 18:17:15.569685  841431 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/newest-cni-993416/id_rsa Username:docker}
	I0316 18:17:15.660879  841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 18:17:15.691404  841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0316 18:17:15.725806  841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0316 18:17:15.755915  841431 provision.go:87] duration metric: took 310.536281ms to configureAuth
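
Note on the provision step above: configureAuth signs a Docker machine server certificate with the local minikube CA, using the SAN list printed in the log (loopback, the VM IP 192.168.72.228, and the host names). A minimal sketch of that signing flow with Go's crypto/x509; the file names and the signServerCert helper are illustrative assumptions, not minikube's actual code:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"errors"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    // signServerCert signs a server certificate for the given SANs with the
    // CA loaded from caCertPEM/caKeyPEM. Hypothetical helper for illustration;
    // it assumes a PKCS#1 RSA CA key.
    func signServerCert(caCertPEM, caKeyPEM []byte, sans []string) ([]byte, error) {
    	caBlock, _ := pem.Decode(caCertPEM)
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	if caBlock == nil || keyBlock == nil {
    		return nil, errors.New("invalid CA PEM input")
    	}
    	caCert, err := x509.ParseCertificate(caBlock.Bytes)
    	if err != nil {
    		return nil, err
    	}
    	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
    	if err != nil {
    		return nil, err
    	}
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, err
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-993416"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// Split SANs into IPs and DNS names, as in the san=[...] list above.
    	for _, san := range sans {
    		if ip := net.ParseIP(san); ip != nil {
    			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
    		} else {
    			tmpl.DNSNames = append(tmpl.DNSNames, san)
    		}
    	}
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, err
    	}
    	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }

    func main() {
    	caCert, _ := os.ReadFile("ca.pem")
    	caKey, _ := os.ReadFile("ca-key.pem")
    	out, err := signServerCert(caCert, caKey,
    		[]string{"127.0.0.1", "192.168.72.228", "localhost", "minikube", "newest-cni-993416"})
    	if err != nil {
    		panic(err)
    	}
    	os.WriteFile("server.pem", out, 0o644)
    }
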
	I0316 18:17:15.755947  841431 buildroot.go:189] setting minikube options for container-runtime
	I0316 18:17:15.756143  841431 config.go:182] Loaded profile config "newest-cni-993416": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I0316 18:17:15.756154  841431 machine.go:97] duration metric: took 714.570228ms to provisionDockerMachine
	I0316 18:17:15.756163  841431 start.go:293] postStartSetup for "newest-cni-993416" (driver="kvm2")
	I0316 18:17:15.756177  841431 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 18:17:15.756212  841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
	I0316 18:17:15.756603  841431 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 18:17:15.756655  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
	I0316 18:17:15.759498  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:15.759902  841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
	I0316 18:17:15.759931  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:15.760147  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHPort
	I0316 18:17:15.760360  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
	I0316 18:17:15.760511  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHUsername
	I0316 18:17:15.760640  841431 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/newest-cni-993416/id_rsa Username:docker}
	I0316 18:17:15.853438  841431 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 18:17:15.858894  841431 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 18:17:15.858927  841431 filesync.go:126] Scanning /home/jenkins/minikube-integration/18277-781196/.minikube/addons for local assets ...
	I0316 18:17:15.858987  841431 filesync.go:126] Scanning /home/jenkins/minikube-integration/18277-781196/.minikube/files for local assets ...
	I0316 18:17:15.859061  841431 filesync.go:149] local asset: /home/jenkins/minikube-integration/18277-781196/.minikube/files/etc/ssl/certs/7884422.pem -> 7884422.pem in /etc/ssl/certs
	I0316 18:17:15.859151  841431 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 18:17:15.872026  841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/files/etc/ssl/certs/7884422.pem --> /etc/ssl/certs/7884422.pem (1708 bytes)
	I0316 18:17:15.901994  841431 start.go:296] duration metric: took 145.809588ms for postStartSetup
	I0316 18:17:15.902056  841431 fix.go:56] duration metric: took 22.557868796s for fixHost
	I0316 18:17:15.902086  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
	I0316 18:17:15.905039  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:15.905391  841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
	I0316 18:17:15.905422  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:15.905734  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHPort
	I0316 18:17:15.905939  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
	I0316 18:17:15.906099  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
	I0316 18:17:15.906230  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHUsername
	I0316 18:17:15.906386  841431 main.go:141] libmachine: Using SSH client type: native
	I0316 18:17:15.906652  841431 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.228 22 <nil> <nil>}
	I0316 18:17:15.906668  841431 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0316 18:17:16.025541  841431 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710613036.006722537
	
	I0316 18:17:16.025567  841431 fix.go:216] guest clock: 1710613036.006722537
	I0316 18:17:16.025577  841431 fix.go:229] Guest: 2024-03-16 18:17:16.006722537 +0000 UTC Remote: 2024-03-16 18:17:15.902062825 +0000 UTC m=+22.725621869 (delta=104.659712ms)
	I0316 18:17:16.025634  841431 fix.go:200] guest clock delta is within tolerance: 104.659712ms
	I0316 18:17:16.025641  841431 start.go:83] releasing machines lock for "newest-cni-993416", held for 22.681465652s
	I0316 18:17:16.025671  841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
	I0316 18:17:16.025987  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetIP
	I0316 18:17:16.028606  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:16.028956  841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
	I0316 18:17:16.028982  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:16.029138  841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
	I0316 18:17:16.029766  841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
	I0316 18:17:16.030018  841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
	I0316 18:17:16.030150  841431 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 18:17:16.030235  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
	I0316 18:17:16.030305  841431 ssh_runner.go:195] Run: cat /version.json
	I0316 18:17:16.030333  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
	I0316 18:17:16.033028  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:16.033349  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:16.033393  841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
	I0316 18:17:16.033416  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:16.033554  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHPort
	I0316 18:17:16.033791  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
	I0316 18:17:16.033902  841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
	I0316 18:17:16.033929  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:16.033963  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHUsername
	I0316 18:17:16.034038  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHPort
	I0316 18:17:16.034148  841431 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/newest-cni-993416/id_rsa Username:docker}
	I0316 18:17:16.034265  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
	I0316 18:17:16.034456  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHUsername
	I0316 18:17:16.034640  841431 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/newest-cni-993416/id_rsa Username:docker}
	I0316 18:17:16.118048  841431 ssh_runner.go:195] Run: systemctl --version
	I0316 18:17:16.146259  841431 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 18:17:16.154503  841431 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 18:17:16.154585  841431 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 18:17:16.177501  841431 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
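
The find/mv pass above neutralizes competing bridge/podman CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, so minikube's own bridge config wins. A rough Go equivalent of that pass (paths taken from the log; not minikube's actual implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	// Match first-level bridge/podman configs, mirroring the find
    	// expression in the log, and park them under a .mk_disabled suffix.
    	entries, err := os.ReadDir("/etc/cni/net.d")
    	if err != nil {
    		panic(err)
    	}
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join("/etc/cni/net.d", name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				panic(err)
    			}
    			fmt.Printf("disabled %s\n", src)
    		}
    	}
    }
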
	I0316 18:17:16.177539  841431 start.go:494] detecting cgroup driver to use...
	I0316 18:17:16.177624  841431 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0316 18:17:16.214268  841431 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0316 18:17:16.231541  841431 docker.go:217] disabling cri-docker service (if available) ...
	I0316 18:17:16.231611  841431 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 18:17:16.249494  841431 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 18:17:16.266543  841431 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 18:17:16.396368  841431 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 18:17:16.568119  841431 docker.go:233] disabling docker service ...
	I0316 18:17:16.568275  841431 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 18:17:16.587606  841431 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 18:17:16.603814  841431 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 18:17:16.753806  841431 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 18:17:16.907508  841431 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 18:17:16.925332  841431 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 18:17:16.950811  841431 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0316 18:17:16.966511  841431 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0316 18:17:16.981307  841431 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0316 18:17:16.981402  841431 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0316 18:17:16.995896  841431 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0316 18:17:17.010189  841431 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0316 18:17:17.027988  841431 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0316 18:17:17.042158  841431 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 18:17:17.056955  841431 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
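
The sed calls above flip containerd onto the cgroupfs driver (SystemdCgroup = false), pin the pause image, migrate runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. A minimal Go sketch of one such in-place rewrite; the regexp mirrors the sed expression, with error handling trimmed:

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0o644); err != nil {
    		panic(err)
    	}
    }
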
	I0316 18:17:17.071564  841431 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 18:17:17.084678  841431 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 18:17:17.084760  841431 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 18:17:17.102942  841431 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 18:17:17.116045  841431 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 18:17:17.254390  841431 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0316 18:17:17.288841  841431 start.go:541] Will wait 60s for socket path /run/containerd/containerd.sock
	I0316 18:17:17.288923  841431 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0316 18:17:17.294823  841431 retry.go:31] will retry after 1.431471638s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0316 18:17:18.727391  841431 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
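
After restarting containerd, the start path stats /run/containerd/containerd.sock and retries (here after ~1.4s) until the socket appears, within the 60s budget noted at start.go:541. A compact sketch of that wait loop; the backoff interval is assumed for illustration:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists or the deadline passes, roughly
    // matching the retry visible in the log above.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(1500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
    		panic(err)
    	}
    }
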
	I0316 18:17:18.733834  841431 start.go:562] Will wait 60s for crictl version
	I0316 18:17:18.733903  841431 ssh_runner.go:195] Run: which crictl
	I0316 18:17:18.739046  841431 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 18:17:18.791238  841431 start.go:578] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.7.14
	RuntimeApiVersion:  v1
	I0316 18:17:18.791309  841431 ssh_runner.go:195] Run: containerd --version
	I0316 18:17:18.830819  841431 ssh_runner.go:195] Run: containerd --version
	I0316 18:17:18.872315  841431 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on containerd 1.7.14 ...
	I0316 18:17:18.873653  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetIP
	I0316 18:17:18.876402  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:18.876758  841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
	I0316 18:17:18.876791  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:18.876986  841431 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0316 18:17:18.882277  841431 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
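
The bash one-liner above makes the /etc/hosts update idempotent: strip any existing host.minikube.internal line, append the fresh mapping, then copy the temp file back into place. The same filter-and-append expressed in Go (sketch only; the real command runs under sudo over SSH):

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const hostsPath = "/etc/hosts"
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any stale mapping, matching the grep -v in the log.
    		if strings.HasSuffix(line, "\thost.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, "192.168.72.1\thost.minikube.internal")
    	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		panic(err)
    	}
    }
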
	I0316 18:17:18.902779  841431 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0316 18:17:14.519518  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:16.519651  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:19.019366  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:18.904376  841431 kubeadm.go:877] updating cluster {Name:newest-cni-993416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-993416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.228 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 18:17:18.904552  841431 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0316 18:17:18.904644  841431 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 18:17:18.951816  841431 containerd.go:612] all images are preloaded for containerd runtime.
	I0316 18:17:18.951843  841431 containerd.go:519] Images already preloaded, skipping extraction
	I0316 18:17:18.951903  841431 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 18:17:18.998694  841431 containerd.go:612] all images are preloaded for containerd runtime.
	I0316 18:17:18.998725  841431 cache_images.go:84] Images are preloaded, skipping loading
	I0316 18:17:18.998737  841431 kubeadm.go:928] updating node { 192.168.72.228 8443 v1.29.0-rc.2 containerd true true} ...
	I0316 18:17:18.998890  841431 kubeadm.go:940] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-993416 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.228
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-993416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 18:17:18.998969  841431 ssh_runner.go:195] Run: sudo crictl info
	I0316 18:17:19.053845  841431 cni.go:84] Creating CNI manager for ""
	I0316 18:17:19.053877  841431 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0316 18:17:19.053894  841431 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0316 18:17:19.053947  841431 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.228 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-993416 NodeName:newest-cni-993416 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.228"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.228 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0316 18:17:19.054110  841431 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.228
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-993416"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.228
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.228"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 18:17:19.054203  841431 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0316 18:17:19.069549  841431 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 18:17:19.069638  841431 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 18:17:19.081418  841431 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I0316 18:17:19.102862  841431 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0316 18:17:19.124134  841431 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2306 bytes)
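
The three `scp memory` lines above persist generated text (a kubelet drop-in, the kubelet unit, and the kubeadm config printed earlier) rather than copying files from disk. Configs like that kubeadm.yaml are typically rendered from a template; a hedged sketch of how such rendering can look with text/template, where the parameter struct and template body are illustrative, not minikube's actual template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeadmParams is a hypothetical parameter struct for illustration.
    type kubeadmParams struct {
    	AdvertiseAddress string
    	BindPort         int
    	NodeName         string
    	PodSubnet        string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(tmpl))
    	err := t.Execute(os.Stdout, kubeadmParams{
    		AdvertiseAddress: "192.168.72.228",
    		BindPort:         8443,
    		NodeName:         "newest-cni-993416",
    		PodSubnet:        "10.42.0.0/16",
    	})
    	if err != nil {
    		panic(err)
    	}
    }
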
	I0316 18:17:19.146599  841431 ssh_runner.go:195] Run: grep 192.168.72.228	control-plane.minikube.internal$ /etc/hosts
	I0316 18:17:19.151909  841431 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.228	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 18:17:19.169197  841431 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 18:17:19.309000  841431 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 18:17:19.331332  841431 certs.go:68] Setting up /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/newest-cni-993416 for IP: 192.168.72.228
	I0316 18:17:19.331366  841431 certs.go:194] generating shared ca certs ...
	I0316 18:17:19.331389  841431 certs.go:226] acquiring lock for ca certs: {Name:mk0c50354a81ee6e126f21f3d5a16214134194fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 18:17:19.331568  841431 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18277-781196/.minikube/ca.key
	I0316 18:17:19.331608  841431 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18277-781196/.minikube/proxy-client-ca.key
	I0316 18:17:19.331616  841431 certs.go:256] generating profile certs ...
	I0316 18:17:19.331738  841431 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/newest-cni-993416/client.key
	I0316 18:17:19.331835  841431 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/newest-cni-993416/apiserver.key.6606b315
	I0316 18:17:19.331885  841431 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/newest-cni-993416/proxy-client.key
	I0316 18:17:19.331987  841431 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/788442.pem (1338 bytes)
	W0316 18:17:19.332021  841431 certs.go:480] ignoring /home/jenkins/minikube-integration/18277-781196/.minikube/certs/788442_empty.pem, impossibly tiny 0 bytes
	I0316 18:17:19.332029  841431 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca-key.pem (1679 bytes)
	I0316 18:17:19.332050  841431 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/ca.pem (1082 bytes)
	I0316 18:17:19.332074  841431 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/cert.pem (1123 bytes)
	I0316 18:17:19.332101  841431 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-781196/.minikube/certs/key.pem (1675 bytes)
	I0316 18:17:19.332138  841431 certs.go:484] found cert: /home/jenkins/minikube-integration/18277-781196/.minikube/files/etc/ssl/certs/7884422.pem (1708 bytes)
	I0316 18:17:19.332941  841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 18:17:19.371244  841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 18:17:19.412285  841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 18:17:19.450101  841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 18:17:19.485371  841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/newest-cni-993416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0316 18:17:19.521337  841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/newest-cni-993416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 18:17:19.560592  841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/newest-cni-993416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 18:17:19.597429  841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/newest-cni-993416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0316 18:17:19.631736  841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/files/etc/ssl/certs/7884422.pem --> /usr/share/ca-certificates/7884422.pem (1708 bytes)
	I0316 18:17:19.662038  841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 18:17:19.693854  841431 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18277-781196/.minikube/certs/788442.pem --> /usr/share/ca-certificates/788442.pem (1338 bytes)
	I0316 18:17:19.726417  841431 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 18:17:19.749016  841431 ssh_runner.go:195] Run: openssl version
	I0316 18:17:19.756280  841431 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7884422.pem && ln -fs /usr/share/ca-certificates/7884422.pem /etc/ssl/certs/7884422.pem"
	I0316 18:17:19.771479  841431 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7884422.pem
	I0316 18:17:19.777588  841431 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 16 17:02 /usr/share/ca-certificates/7884422.pem
	I0316 18:17:19.777667  841431 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7884422.pem
	I0316 18:17:19.785507  841431 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7884422.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 18:17:19.802306  841431 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 18:17:19.818636  841431 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 18:17:19.825230  841431 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 16 16:56 /usr/share/ca-certificates/minikubeCA.pem
	I0316 18:17:19.825307  841431 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 18:17:19.832744  841431 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 18:17:19.847571  841431 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/788442.pem && ln -fs /usr/share/ca-certificates/788442.pem /etc/ssl/certs/788442.pem"
	I0316 18:17:19.862872  841431 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/788442.pem
	I0316 18:17:19.869402  841431 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 16 17:02 /usr/share/ca-certificates/788442.pem
	I0316 18:17:19.869490  841431 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/788442.pem
	I0316 18:17:19.876895  841431 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/788442.pem /etc/ssl/certs/51391683.0"
	I0316 18:17:19.892130  841431 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 18:17:19.898268  841431 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 18:17:19.905980  841431 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 18:17:19.913801  841431 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 18:17:19.921756  841431 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 18:17:19.930123  841431 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 18:17:19.938266  841431 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
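
The `openssl x509 -checkend 86400` calls above verify each control-plane certificate is still valid for at least a day before reusing it. The same check in pure Go with crypto/x509 (sketch; the path is one example from the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the PEM certificate at path remains valid for
    // at least d, mirroring `openssl x509 -checkend <seconds>`.
    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("valid for another 24h:", ok)
    }
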
	I0316 18:17:19.946303  841431 kubeadm.go:391] StartCluster: {Name:newest-cni-993416 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-993416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.228 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 18:17:19.946404  841431 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0316 18:17:19.946466  841431 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 18:17:19.998436  841431 cri.go:89] found id: "a71833ecc67de27b7e2cff17605a6f58252d6af66e62ca9d2e011e312ba56200"
	I0316 18:17:19.998471  841431 cri.go:89] found id: "69aa0a81debd0b781ccced3e81eb778e7a148b96b1bde020d132d5e5684a75f5"
	I0316 18:17:19.998478  841431 cri.go:89] found id: "0edf488fb1cbfad331fbc504372cd2726a4af55918a333176a2c7e1487eda0b3"
	I0316 18:17:19.998483  841431 cri.go:89] found id: "e091404a6139a0f992e59474c4c3d5acaea8d175b13b3704508458556f16aef6"
	I0316 18:17:19.998496  841431 cri.go:89] found id: "761688729782830a759f339f0603d5276a117c549be3230363a12e289e688a01"
	I0316 18:17:19.998505  841431 cri.go:89] found id: "d404131e07cded2bda65abc2bc08661a3f501956c1431d91b53ef2d61bdc6ff7"
	I0316 18:17:19.998508  841431 cri.go:89] found id: "3f2ee94758eaa4175186f378d85fe346d2fc5f3ea161a1325e1d24593be3d5bc"
	I0316 18:17:19.998513  841431 cri.go:89] found id: "6f27aa35bed1c441bf6062b1ab25c5cf18e127bd3891d160dd7c26c0f29af1f2"
	I0316 18:17:19.998517  841431 cri.go:89] found id: ""
	I0316 18:17:19.998571  841431 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0316 18:17:20.016557  841431 cri.go:116] JSON = null
	W0316 18:17:20.016625  841431 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0316 18:17:20.016712  841431 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 18:17:20.030189  841431 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 18:17:20.030216  841431 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 18:17:20.030221  841431 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 18:17:20.030266  841431 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 18:17:20.043013  841431 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 18:17:20.043748  841431 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-993416" does not appear in /home/jenkins/minikube-integration/18277-781196/kubeconfig
	I0316 18:17:20.044051  841431 kubeconfig.go:62] /home/jenkins/minikube-integration/18277-781196/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-993416" cluster setting kubeconfig missing "newest-cni-993416" context setting]
	I0316 18:17:20.044591  841431 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-781196/kubeconfig: {Name:mke76908283b58e263a226954335fd60fd02692a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 18:17:20.046076  841431 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 18:17:20.059175  841431 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.228
	I0316 18:17:20.059227  841431 kubeadm.go:1154] stopping kube-system containers ...
	I0316 18:17:20.059243  841431 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0316 18:17:20.059329  841431 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 18:17:20.103617  841431 cri.go:89] found id: "a71833ecc67de27b7e2cff17605a6f58252d6af66e62ca9d2e011e312ba56200"
	I0316 18:17:20.103643  841431 cri.go:89] found id: "69aa0a81debd0b781ccced3e81eb778e7a148b96b1bde020d132d5e5684a75f5"
	I0316 18:17:20.103647  841431 cri.go:89] found id: "0edf488fb1cbfad331fbc504372cd2726a4af55918a333176a2c7e1487eda0b3"
	I0316 18:17:20.103650  841431 cri.go:89] found id: "e091404a6139a0f992e59474c4c3d5acaea8d175b13b3704508458556f16aef6"
	I0316 18:17:20.103653  841431 cri.go:89] found id: "761688729782830a759f339f0603d5276a117c549be3230363a12e289e688a01"
	I0316 18:17:20.103657  841431 cri.go:89] found id: "d404131e07cded2bda65abc2bc08661a3f501956c1431d91b53ef2d61bdc6ff7"
	I0316 18:17:20.103660  841431 cri.go:89] found id: "3f2ee94758eaa4175186f378d85fe346d2fc5f3ea161a1325e1d24593be3d5bc"
	I0316 18:17:20.103664  841431 cri.go:89] found id: "6f27aa35bed1c441bf6062b1ab25c5cf18e127bd3891d160dd7c26c0f29af1f2"
	I0316 18:17:20.103668  841431 cri.go:89] found id: ""
	I0316 18:17:20.103677  841431 cri.go:234] Stopping containers: [a71833ecc67de27b7e2cff17605a6f58252d6af66e62ca9d2e011e312ba56200 69aa0a81debd0b781ccced3e81eb778e7a148b96b1bde020d132d5e5684a75f5 0edf488fb1cbfad331fbc504372cd2726a4af55918a333176a2c7e1487eda0b3 e091404a6139a0f992e59474c4c3d5acaea8d175b13b3704508458556f16aef6 761688729782830a759f339f0603d5276a117c549be3230363a12e289e688a01 d404131e07cded2bda65abc2bc08661a3f501956c1431d91b53ef2d61bdc6ff7 3f2ee94758eaa4175186f378d85fe346d2fc5f3ea161a1325e1d24593be3d5bc 6f27aa35bed1c441bf6062b1ab25c5cf18e127bd3891d160dd7c26c0f29af1f2]
	I0316 18:17:20.103748  841431 ssh_runner.go:195] Run: which crictl
	I0316 18:17:20.109013  841431 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 a71833ecc67de27b7e2cff17605a6f58252d6af66e62ca9d2e011e312ba56200 69aa0a81debd0b781ccced3e81eb778e7a148b96b1bde020d132d5e5684a75f5 0edf488fb1cbfad331fbc504372cd2726a4af55918a333176a2c7e1487eda0b3 e091404a6139a0f992e59474c4c3d5acaea8d175b13b3704508458556f16aef6 761688729782830a759f339f0603d5276a117c549be3230363a12e289e688a01 d404131e07cded2bda65abc2bc08661a3f501956c1431d91b53ef2d61bdc6ff7 3f2ee94758eaa4175186f378d85fe346d2fc5f3ea161a1325e1d24593be3d5bc 6f27aa35bed1c441bf6062b1ab25c5cf18e127bd3891d160dd7c26c0f29af1f2
	I0316 18:17:20.154788  841431 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 18:17:20.173228  841431 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 18:17:20.185106  841431 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 18:17:20.185133  841431 kubeadm.go:156] found existing configuration files:
	
	I0316 18:17:20.185190  841431 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 18:17:20.196457  841431 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 18:17:20.196535  841431 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 18:17:20.208090  841431 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 18:17:20.219476  841431 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 18:17:20.219594  841431 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 18:17:20.231087  841431 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 18:17:20.242471  841431 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 18:17:20.242539  841431 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 18:17:20.254512  841431 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 18:17:20.266221  841431 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 18:17:20.266313  841431 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
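
The sequence above is a stale-config sweep: for each kubeconfig-style file under /etc/kubernetes, grep for the expected control-plane endpoint and remove the file when the endpoint is absent (here all four are simply missing, so kubeadm will regenerate them). The same loop sketched in Go:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		// Missing file or wrong endpoint: remove so kubeadm regenerates it.
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			os.Remove(f)
    			fmt.Printf("removed stale %s\n", f)
    			continue
    		}
    		fmt.Printf("kept %s\n", f)
    	}
    }
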
	I0316 18:17:20.278335  841431 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 18:17:20.291364  841431 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 18:17:20.441748  841431 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 18:17:21.552425  841431 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.110633969s)
	I0316 18:17:21.552480  841431 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 18:17:21.787500  841431 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 18:17:21.883417  841431 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0316 18:17:21.979379  841431 api_server.go:52] waiting for apiserver process to appear ...
	I0316 18:17:21.979505  841431 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:17:22.479612  841431 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:17:22.980465  841431 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:17:21.019491  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:23.021112  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:23.480359  841431 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:17:23.512208  841431 api_server.go:72] duration metric: took 1.53285958s to wait for apiserver process to appear ...
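
The wait above polls `sudo pgrep -xnf kube-apiserver.*minikube.*` until the apiserver process exists, at the roughly half-second cadence visible in the timestamps. A sketch of that loop; the interval and timeout values are assumptions:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServerProc polls pgrep until a kube-apiserver process whose
    // command line mentions minikube shows up, as in the log above.
    func waitForAPIServerProc(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// pgrep exits non-zero when nothing matches, so Run's error
    		// doubles as the "not found yet" signal.
    		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
    }

    func main() {
    	if err := waitForAPIServerProc(60 * time.Second); err != nil {
    		panic(err)
    	}
    }
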
	I0316 18:17:23.512244  841431 api_server.go:88] waiting for apiserver healthz status ...
	I0316 18:17:23.512269  841431 api_server.go:253] Checking apiserver healthz at https://192.168.72.228:8443/healthz ...
	I0316 18:17:23.512848  841431 api_server.go:269] stopped: https://192.168.72.228:8443/healthz: Get "https://192.168.72.228:8443/healthz": dial tcp 192.168.72.228:8443: connect: connection refused
	I0316 18:17:24.012400  841431 api_server.go:253] Checking apiserver healthz at https://192.168.72.228:8443/healthz ...
	I0316 18:17:26.387879  841431 api_server.go:279] https://192.168.72.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 18:17:26.387946  841431 api_server.go:103] status: https://192.168.72.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 18:17:26.387968  841431 api_server.go:253] Checking apiserver healthz at https://192.168.72.228:8443/healthz ...
	I0316 18:17:26.417506  841431 api_server.go:279] https://192.168.72.228:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 18:17:26.417545  841431 api_server.go:103] status: https://192.168.72.228:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 18:17:26.512809  841431 api_server.go:253] Checking apiserver healthz at https://192.168.72.228:8443/healthz ...
	I0316 18:17:26.525228  841431 api_server.go:279] https://192.168.72.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 18:17:26.525276  841431 api_server.go:103] status: https://192.168.72.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 18:17:27.012795  841431 api_server.go:253] Checking apiserver healthz at https://192.168.72.228:8443/healthz ...
	I0316 18:17:27.024678  841431 api_server.go:279] https://192.168.72.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 18:17:27.024722  841431 api_server.go:103] status: https://192.168.72.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 18:17:27.513345  841431 api_server.go:253] Checking apiserver healthz at https://192.168.72.228:8443/healthz ...
	I0316 18:17:27.530929  841431 api_server.go:279] https://192.168.72.228:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 18:17:27.530980  841431 api_server.go:103] status: https://192.168.72.228:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 18:17:28.012475  841431 api_server.go:253] Checking apiserver healthz at https://192.168.72.228:8443/healthz ...
	I0316 18:17:28.017944  841431 api_server.go:279] https://192.168.72.228:8443/healthz returned 200:
	ok
	I0316 18:17:28.025825  841431 api_server.go:141] control plane version: v1.29.0-rc.2
	I0316 18:17:28.025883  841431 api_server.go:131] duration metric: took 4.513628784s to wait for apiserver health ...
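
The 500s above are kube-apiserver's /healthz aggregating its post-start hooks: each check is listed with [+] or [-], and the rbac and scheduling bootstrap hooks are the last to clear. The endpoint is public, so per-check failure details are replaced with "reason withheld"; the real reasons go to the apiserver's own log and to the per-check /healthz/<name> endpoints. A minimal sketch of the kind of polling loop behind the api_server.go lines above, assuming the ~500ms cadence visible in the timestamps (the insecure TLS config is illustrative only; the real code trusts the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "os"
        "time"
    )

    // waitForHealthz polls /healthz until it returns 200 or the deadline
    // passes. A failing hook surfaces as HTTP 500 with the [+]/[-] check
    // list seen in the log; success returns a body of just "ok".
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.72.228:8443/healthz", time.Minute); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
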
	I0316 18:17:28.025897  841431 cni.go:84] Creating CNI manager for ""
	I0316 18:17:28.025907  841431 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0316 18:17:28.027996  841431 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 18:17:28.029481  841431 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 18:17:28.042768  841431 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
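
The conflist itself is generated in memory and copied over SSH ("scp memory"), so its content never appears in the log. A sketch of what a minikube-style bridge conflist typically contains, written out the same way (the field values, the pod CIDR, and the exact plugin list are assumptions, not the real 457-byte file):

    package main

    import (
        "log"
        "os"
    )

    // A typical CNI bridge configuration: a bridge plugin with host-local
    // IPAM, plus portmap for hostPort support. Values are illustrative.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }`

    func main() {
        // Needs root, like the `sudo mkdir -p /etc/cni/net.d` step above.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }
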
	I0316 18:17:28.073448  841431 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 18:17:28.085932  841431 system_pods.go:59] 9 kube-system pods found
	I0316 18:17:28.085981  841431 system_pods.go:61] "coredns-76f75df574-hkkkh" [efd50172-4179-4235-adcf-2cc14383680d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 18:17:28.085991  841431 system_pods.go:61] "coredns-76f75df574-rhrkz" [3f5fe20f-4f2b-4dad-ab54-c00261ce77fb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 18:17:28.086002  841431 system_pods.go:61] "etcd-newest-cni-993416" [f9d9e16d-4c48-41ef-954d-84b2adc1d678] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0316 18:17:28.086021  841431 system_pods.go:61] "kube-apiserver-newest-cni-993416" [b745c8a8-8c3a-48a8-8884-8952190b871e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0316 18:17:28.086032  841431 system_pods.go:61] "kube-controller-manager-newest-cni-993416" [d0879001-bfc2-4268-a421-9257bc6155cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0316 18:17:28.086041  841431 system_pods.go:61] "kube-proxy-lbfnv" [4269401d-14f7-4752-a7df-ec3f9da042d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 18:17:28.086055  841431 system_pods.go:61] "kube-scheduler-newest-cni-993416" [53741680-de3a-449b-ab2b-a520bc8c2c54] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0316 18:17:28.086067  841431 system_pods.go:61] "metrics-server-57f55c9bc5-rbrmj" [3eabea78-4346-49ea-ada5-72c98a6daa7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 18:17:28.086081  841431 system_pods.go:61] "storage-provisioner" [0d551c52-212b-4b92-9b76-e1034e2d8d0b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0316 18:17:28.086098  841431 system_pods.go:74] duration metric: took 12.609767ms to wait for pod list to return data ...
	I0316 18:17:28.086110  841431 node_conditions.go:102] verifying NodePressure condition ...
	I0316 18:17:28.095367  841431 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 18:17:28.095404  841431 node_conditions.go:123] node cpu capacity is 2
	I0316 18:17:28.095470  841431 node_conditions.go:105] duration metric: took 9.349036ms to run NodePressure ...
	I0316 18:17:28.095509  841431 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 18:17:28.403986  841431 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0316 18:17:28.424295  841431 ops.go:34] apiserver oom_adj: -16
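
After re-running the addon phase, minikube sanity-checks that the apiserver is shielded from the kernel OOM killer: the -16 read back through the legacy /proc oom_adj knob is how the oom_score_adj of -998 that the kubelet assigns to critical static pods appears there (lower means "kill me last"). A sketch of that check, mirroring the `cat /proc/$(pgrep kube-apiserver)/oom_adj` pipeline above:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // apiserverOOMAdj finds the kube-apiserver pid with pgrep and reads its
    // legacy oom_adj value; pgrep may print several pids, so take the first.
    func apiserverOOMAdj() (string, error) {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            return "", fmt.Errorf("pgrep: %w", err)
        }
        pid := strings.Fields(string(out))[0]
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(adj)), nil
    }

    func main() {
        adj, err := apiserverOOMAdj()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("apiserver oom_adj:", adj) // -16 in the run above
    }
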
	I0316 18:17:28.424329  841431 kubeadm.go:591] duration metric: took 8.394102538s to restartPrimaryControlPlane
	I0316 18:17:28.424343  841431 kubeadm.go:393] duration metric: took 8.478062582s to StartCluster
	I0316 18:17:28.424368  841431 settings.go:142] acquiring lock: {Name:mk5e1e3433840176063e5baa5db7056716046a6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 18:17:28.424472  841431 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18277-781196/kubeconfig
	I0316 18:17:28.425801  841431 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18277-781196/kubeconfig: {Name:mke76908283b58e263a226954335fd60fd02692a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 18:17:28.426202  841431 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.228 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0316 18:17:28.427700  841431 out.go:177] * Verifying Kubernetes components...
	I0316 18:17:28.426291  841431 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0316 18:17:28.426509  841431 config.go:182] Loaded profile config "newest-cni-993416": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.29.0-rc.2
	I0316 18:17:28.429281  841431 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 18:17:28.427842  841431 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-993416"
	I0316 18:17:28.429391  841431 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-993416"
	W0316 18:17:28.429410  841431 addons.go:243] addon storage-provisioner should already be in state true
	I0316 18:17:28.427844  841431 addons.go:69] Setting dashboard=true in profile "newest-cni-993416"
	I0316 18:17:28.429450  841431 host.go:66] Checking if "newest-cni-993416" exists ...
	I0316 18:17:28.429469  841431 addons.go:234] Setting addon dashboard=true in "newest-cni-993416"
	W0316 18:17:28.429481  841431 addons.go:243] addon dashboard should already be in state true
	I0316 18:17:28.429509  841431 host.go:66] Checking if "newest-cni-993416" exists ...
	I0316 18:17:28.427858  841431 addons.go:69] Setting default-storageclass=true in profile "newest-cni-993416"
	I0316 18:17:28.429616  841431 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-993416"
	I0316 18:17:28.427872  841431 addons.go:69] Setting metrics-server=true in profile "newest-cni-993416"
	I0316 18:17:28.429723  841431 addons.go:234] Setting addon metrics-server=true in "newest-cni-993416"
	W0316 18:17:28.429738  841431 addons.go:243] addon metrics-server should already be in state true
	I0316 18:17:28.429777  841431 host.go:66] Checking if "newest-cni-993416" exists ...
	I0316 18:17:28.429889  841431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 18:17:28.429936  841431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 18:17:28.429953  841431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 18:17:28.429996  841431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 18:17:28.430042  841431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 18:17:28.430073  841431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 18:17:28.430169  841431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 18:17:28.430210  841431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 18:17:28.447013  841431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43297
	I0316 18:17:28.447559  841431 main.go:141] libmachine: () Calling .GetVersion
	I0316 18:17:28.448208  841431 main.go:141] libmachine: Using API Version  1
	I0316 18:17:28.448238  841431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 18:17:28.448677  841431 main.go:141] libmachine: () Calling .GetMachineName
	I0316 18:17:28.449343  841431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 18:17:28.449398  841431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 18:17:28.451831  841431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44269
	I0316 18:17:28.451847  841431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43153
	I0316 18:17:28.452339  841431 main.go:141] libmachine: () Calling .GetVersion
	I0316 18:17:28.452533  841431 main.go:141] libmachine: () Calling .GetVersion
	I0316 18:17:28.453149  841431 main.go:141] libmachine: Using API Version  1
	I0316 18:17:28.453169  841431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 18:17:28.453289  841431 main.go:141] libmachine: Using API Version  1
	I0316 18:17:28.453307  841431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 18:17:28.453621  841431 main.go:141] libmachine: () Calling .GetMachineName
	I0316 18:17:28.453815  841431 main.go:141] libmachine: () Calling .GetMachineName
	I0316 18:17:28.454315  841431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 18:17:28.454370  841431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 18:17:28.454605  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetState
	I0316 18:17:28.455348  841431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37275
	I0316 18:17:28.456170  841431 main.go:141] libmachine: () Calling .GetVersion
	I0316 18:17:28.456672  841431 main.go:141] libmachine: Using API Version  1
	I0316 18:17:28.456692  841431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 18:17:28.457050  841431 main.go:141] libmachine: () Calling .GetMachineName
	I0316 18:17:28.457637  841431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 18:17:28.457695  841431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 18:17:28.458326  841431 addons.go:234] Setting addon default-storageclass=true in "newest-cni-993416"
	W0316 18:17:28.458344  841431 addons.go:243] addon default-storageclass should already be in state true
	I0316 18:17:28.458374  841431 host.go:66] Checking if "newest-cni-993416" exists ...
	I0316 18:17:28.458734  841431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 18:17:28.458778  841431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 18:17:28.471779  841431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37023
	I0316 18:17:28.471775  841431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37055
	I0316 18:17:28.472290  841431 main.go:141] libmachine: () Calling .GetVersion
	I0316 18:17:28.472402  841431 main.go:141] libmachine: () Calling .GetVersion
	I0316 18:17:28.472843  841431 main.go:141] libmachine: Using API Version  1
	I0316 18:17:28.472868  841431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 18:17:28.472994  841431 main.go:141] libmachine: Using API Version  1
	I0316 18:17:28.473017  841431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 18:17:28.473334  841431 main.go:141] libmachine: () Calling .GetMachineName
	I0316 18:17:28.473346  841431 main.go:141] libmachine: () Calling .GetMachineName
	I0316 18:17:28.473512  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetState
	I0316 18:17:28.473688  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetState
	I0316 18:17:28.475749  841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
	I0316 18:17:28.478042  841431 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0316 18:17:28.476260  841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
	I0316 18:17:28.479470  841431 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0316 18:17:28.479493  841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0316 18:17:28.479525  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
	I0316 18:17:28.481120  841431 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0316 18:17:28.481537  841431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42525
	I0316 18:17:28.482639  841431 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0316 18:17:28.484048  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHPort
	I0316 18:17:28.483378  841431 main.go:141] libmachine: () Calling .GetVersion
	I0316 18:17:28.484125  841431 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0316 18:17:28.484141  841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0316 18:17:28.484243  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
	I0316 18:17:28.483617  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:28.484315  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
	I0316 18:17:28.484341  841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
	I0316 18:17:28.484373  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:28.484491  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHUsername
	I0316 18:17:28.484689  841431 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/newest-cni-993416/id_rsa Username:docker}
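
Each addon installer opens its own SSH session to the VM with the profile's key, which is what this "new ssh client" line records. A hedged sketch of that step using golang.org/x/crypto/ssh (host-key checking is skipped purely to keep the sketch short):

    package sshclient

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    // newClient dials the VM as user "docker" with key-based auth, matching
    // the IP/Port/SSHKeyPath/Username tuple logged by sshutil.go above.
    func newClient(ip, port, keyPath string) (*ssh.Client, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        return ssh.Dial("tcp", ip+":"+port, cfg)
    }
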
	I0316 18:17:28.485783  841431 main.go:141] libmachine: Using API Version  1
	I0316 18:17:28.485810  841431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 18:17:28.486348  841431 main.go:141] libmachine: () Calling .GetMachineName
	I0316 18:17:28.487057  841431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 18:17:28.487114  841431 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 18:17:28.487350  841431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37705
	I0316 18:17:28.487634  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:28.487833  841431 main.go:141] libmachine: () Calling .GetVersion
	I0316 18:17:28.488082  841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
	I0316 18:17:28.488108  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:28.488442  841431 main.go:141] libmachine: Using API Version  1
	I0316 18:17:28.488472  841431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 18:17:28.488482  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHPort
	I0316 18:17:28.488683  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
	I0316 18:17:28.488860  841431 main.go:141] libmachine: () Calling .GetMachineName
	I0316 18:17:28.488898  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHUsername
	I0316 18:17:28.489070  841431 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/newest-cni-993416/id_rsa Username:docker}
	I0316 18:17:28.489173  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetState
	I0316 18:17:28.490917  841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
	I0316 18:17:28.493057  841431 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 18:17:25.519500  838136 pod_ready.go:102] pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace has status "Ready":"False"
	I0316 18:17:25.519540  838136 pod_ready.go:81] duration metric: took 4m0.007912771s for pod "metrics-server-9975d5f86-xqhk9" in "kube-system" namespace to be "Ready" ...
	E0316 18:17:25.519551  838136 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0316 18:17:25.519559  838136 pod_ready.go:38] duration metric: took 5m48.09067273s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
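
This is the failure that sinks the old-k8s-version run: pod_ready.go polls every system-critical pod for a Ready condition, and metrics-server, whose image points at the unresolvable fake.domain registry (see the kubelet log below), can never get there, so the 4m wait dies on a context deadline. A sketch of that kind of readiness wait with client-go (illustrative, not minikube's pod_ready.go):

    package readiness

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the pod reports condition Ready=True or the
    // context expires; a pod stuck in ImagePullBackOff stays
    // ContainersNotReady forever, which is exactly the timeout logged above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("pod %s/%s never became Ready: %w", ns, name, ctx.Err())
            case <-time.After(2 * time.Second):
            }
        }
    }
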
	I0316 18:17:25.519577  838136 api_server.go:52] waiting for apiserver process to appear ...
	I0316 18:17:25.519614  838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0316 18:17:25.519725  838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 18:17:25.587023  838136 cri.go:89] found id: "84cebb4cfc43d687983d6d41133a762dda43b9399298c00c44f46847e2f61438"
	I0316 18:17:25.587057  838136 cri.go:89] found id: ""
	I0316 18:17:25.587068  838136 logs.go:276] 1 containers: [84cebb4cfc43d687983d6d41133a762dda43b9399298c00c44f46847e2f61438]
	I0316 18:17:25.587136  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:17:25.593870  838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0316 18:17:25.593959  838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 18:17:25.644646  838136 cri.go:89] found id: "2434210f6c63bec8d2ba7076471915eb02d3219675ee8ac3cab9722cca4f03e9"
	I0316 18:17:25.644677  838136 cri.go:89] found id: ""
	I0316 18:17:25.644687  838136 logs.go:276] 1 containers: [2434210f6c63bec8d2ba7076471915eb02d3219675ee8ac3cab9722cca4f03e9]
	I0316 18:17:25.644751  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:17:25.652161  838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0316 18:17:25.652231  838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 18:17:25.712920  838136 cri.go:89] found id: "61efb30968d2bf3bd0aff15b70ec1a33c3654d61c5164cc2879e18ef21cd1b77"
	I0316 18:17:25.712955  838136 cri.go:89] found id: ""
	I0316 18:17:25.712967  838136 logs.go:276] 1 containers: [61efb30968d2bf3bd0aff15b70ec1a33c3654d61c5164cc2879e18ef21cd1b77]
	I0316 18:17:25.713041  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:17:25.719028  838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0316 18:17:25.719136  838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 18:17:25.773897  838136 cri.go:89] found id: "34b075a6e3dfea5f9806aeb9625651a26b0db86e59f277f6376fd8767fb23b0c"
	I0316 18:17:25.773927  838136 cri.go:89] found id: ""
	I0316 18:17:25.773937  838136 logs.go:276] 1 containers: [34b075a6e3dfea5f9806aeb9625651a26b0db86e59f277f6376fd8767fb23b0c]
	I0316 18:17:25.774002  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:17:25.780138  838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0316 18:17:25.780246  838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 18:17:25.843279  838136 cri.go:89] found id: "d73b58bba35328eea373a801852be747130c9844121cf55bd77643b3531047cd"
	I0316 18:17:25.843309  838136 cri.go:89] found id: ""
	I0316 18:17:25.843317  838136 logs.go:276] 1 containers: [d73b58bba35328eea373a801852be747130c9844121cf55bd77643b3531047cd]
	I0316 18:17:25.843375  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:17:25.848956  838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 18:17:25.849060  838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 18:17:25.899592  838136 cri.go:89] found id: "05061990c3ccf6f330cf21ba541a8be55fca74639e81e4b0d14b30bee51fc554"
	I0316 18:17:25.899624  838136 cri.go:89] found id: "162132fbe06feefe5047b9977675ebb65d90ca0056d9f9a9c6733dda273afd72"
	I0316 18:17:25.899630  838136 cri.go:89] found id: ""
	I0316 18:17:25.899641  838136 logs.go:276] 2 containers: [05061990c3ccf6f330cf21ba541a8be55fca74639e81e4b0d14b30bee51fc554 162132fbe06feefe5047b9977675ebb65d90ca0056d9f9a9c6733dda273afd72]
	I0316 18:17:25.899710  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:17:25.907916  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:17:25.918955  838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0316 18:17:25.919046  838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 18:17:25.971433  838136 cri.go:89] found id: ""
	I0316 18:17:25.971478  838136 logs.go:276] 0 containers: []
	W0316 18:17:25.971490  838136 logs.go:278] No container was found matching "kindnet"
	I0316 18:17:25.971498  838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 18:17:25.971572  838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 18:17:26.021187  838136 cri.go:89] found id: "aba262227c6f69883d13fafc927cfe64d82292e8029ae85f3213b3f2148c23e3"
	I0316 18:17:26.021220  838136 cri.go:89] found id: ""
	I0316 18:17:26.021229  838136 logs.go:276] 1 containers: [aba262227c6f69883d13fafc927cfe64d82292e8029ae85f3213b3f2148c23e3]
	I0316 18:17:26.021296  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:17:26.028046  838136 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0316 18:17:26.028122  838136 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 18:17:26.086850  838136 cri.go:89] found id: "aa120a5aa0d886b8cd2c321b4b358ee6299f67e9b4a59d4782345a8088bff5c8"
	I0316 18:17:26.086875  838136 cri.go:89] found id: "7ed441150c7335e02b0c6b3ac696c632796c0d1229fc30b38f78d02560c87aa6"
	I0316 18:17:26.086879  838136 cri.go:89] found id: ""
	I0316 18:17:26.086887  838136 logs.go:276] 2 containers: [aa120a5aa0d886b8cd2c321b4b358ee6299f67e9b4a59d4782345a8088bff5c8 7ed441150c7335e02b0c6b3ac696c632796c0d1229fc30b38f78d02560c87aa6]
	I0316 18:17:26.086940  838136 ssh_runner.go:195] Run: which crictl
	I0316 18:17:26.093302  838136 ssh_runner.go:195] Run: which crictl
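
Every "listing CRI containers" / "found id" pair above is a crictl invocation with a name filter: --quiet prints bare container IDs, one per line, and -a includes exited containers so crash-looping components still show up. The pattern, sketched (the helper name is ours):

    package cri

    import (
        "os/exec"
        "strings"
    )

    // listContainerIDs mirrors `sudo crictl ps -a --quiet --name=<name>`:
    // it returns the IDs of all containers, running or exited, whose name
    // matches the filter.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }
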
	I0316 18:17:26.101414  838136 logs.go:123] Gathering logs for etcd [2434210f6c63bec8d2ba7076471915eb02d3219675ee8ac3cab9722cca4f03e9] ...
	I0316 18:17:26.101443  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2434210f6c63bec8d2ba7076471915eb02d3219675ee8ac3cab9722cca4f03e9"
	I0316 18:17:26.171632  838136 logs.go:123] Gathering logs for coredns [61efb30968d2bf3bd0aff15b70ec1a33c3654d61c5164cc2879e18ef21cd1b77] ...
	I0316 18:17:26.171697  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61efb30968d2bf3bd0aff15b70ec1a33c3654d61c5164cc2879e18ef21cd1b77"
	I0316 18:17:26.219764  838136 logs.go:123] Gathering logs for storage-provisioner [7ed441150c7335e02b0c6b3ac696c632796c0d1229fc30b38f78d02560c87aa6] ...
	I0316 18:17:26.219813  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ed441150c7335e02b0c6b3ac696c632796c0d1229fc30b38f78d02560c87aa6"
	I0316 18:17:26.281101  838136 logs.go:123] Gathering logs for describe nodes ...
	I0316 18:17:26.281153  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 18:17:26.484976  838136 logs.go:123] Gathering logs for kube-controller-manager [162132fbe06feefe5047b9977675ebb65d90ca0056d9f9a9c6733dda273afd72] ...
	I0316 18:17:26.485019  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 162132fbe06feefe5047b9977675ebb65d90ca0056d9f9a9c6733dda273afd72"
	I0316 18:17:26.556929  838136 logs.go:123] Gathering logs for container status ...
	I0316 18:17:26.556977  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 18:17:26.609552  838136 logs.go:123] Gathering logs for storage-provisioner [aa120a5aa0d886b8cd2c321b4b358ee6299f67e9b4a59d4782345a8088bff5c8] ...
	I0316 18:17:26.609594  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa120a5aa0d886b8cd2c321b4b358ee6299f67e9b4a59d4782345a8088bff5c8"
	I0316 18:17:26.656257  838136 logs.go:123] Gathering logs for kubelet ...
	I0316 18:17:26.656294  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0316 18:17:26.698787  838136 logs.go:138] Found kubelet problem: Mar 16 18:11:24 old-k8s-version-985498 kubelet[888]: E0316 18:11:24.452217     888 pod_workers.go:191] Error syncing pod f8d3d61ad8d45c80ab92bcedbe7fdb7d ("kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-210505493 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/35: file exists"
	W0316 18:17:26.703383  838136 logs.go:138] Found kubelet problem: Mar 16 18:11:27 old-k8s-version-985498 kubelet[888]: E0316 18:11:27.530957     888 pod_workers.go:191] Error syncing pod 31a485c797dc9b239357ad3b694dc86e ("kube-apiserver-old-k8s-version-985498_kube-system(31a485c797dc9b239357ad3b694dc86e)"), skipping: failed to "StartContainer" for "kube-apiserver" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-3710715184 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/36: file exists"
	W0316 18:17:26.705326  838136 logs.go:138] Found kubelet problem: Mar 16 18:11:29 old-k8s-version-985498 kubelet[888]: E0316 18:11:29.589592     888 pod_workers.go:191] Error syncing pod f8d3d61ad8d45c80ab92bcedbe7fdb7d ("kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"
	W0316 18:17:26.708845  838136 logs.go:138] Found kubelet problem: Mar 16 18:11:33 old-k8s-version-985498 kubelet[888]: E0316 18:11:33.774758     888 pod_workers.go:191] Error syncing pod f8d3d61ad8d45c80ab92bcedbe7fdb7d ("kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"
	W0316 18:17:26.713784  838136 logs.go:138] Found kubelet problem: Mar 16 18:11:34 old-k8s-version-985498 kubelet[888]: E0316 18:11:34.296039     888 pod_workers.go:191] Error syncing pod d89b271f-838a-4592-b128-fcb2a06fc5e9 ("storage-provisioner_kube-system(d89b271f-838a-4592-b128-fcb2a06fc5e9)"), skipping: failed to "StartContainer" for "storage-provisioner" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1431217611 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/38: file exists"
	W0316 18:17:26.719803  838136 logs.go:138] Found kubelet problem: Mar 16 18:11:37 old-k8s-version-985498 kubelet[888]: E0316 18:11:37.840851     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0316 18:17:26.719947  838136 logs.go:138] Found kubelet problem: Mar 16 18:11:38 old-k8s-version-985498 kubelet[888]: E0316 18:11:38.487672     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.721883  838136 logs.go:138] Found kubelet problem: Mar 16 18:11:48 old-k8s-version-985498 kubelet[888]: E0316 18:11:48.375825     888 pod_workers.go:191] Error syncing pod f8d3d61ad8d45c80ab92bcedbe7fdb7d ("kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1993581407 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/41: file exists"
	W0316 18:17:26.723186  838136 logs.go:138] Found kubelet problem: Mar 16 18:11:48 old-k8s-version-985498 kubelet[888]: E0316 18:11:48.539670     888 pod_workers.go:191] Error syncing pod daf8607f-2ff3-4d80-b1ed-ca2d24cb6b36 ("kube-proxy-nvd4k_kube-system(daf8607f-2ff3-4d80-b1ed-ca2d24cb6b36)"), skipping: failed to "StartContainer" for "kube-proxy" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2911645386 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/42: file exists"
	W0316 18:17:26.725902  838136 logs.go:138] Found kubelet problem: Mar 16 18:11:50 old-k8s-version-985498 kubelet[888]: E0316 18:11:50.493127     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0316 18:17:26.727816  838136 logs.go:138] Found kubelet problem: Mar 16 18:12:01 old-k8s-version-985498 kubelet[888]: E0316 18:12:01.388860     888 pod_workers.go:191] Error syncing pod f8d3d61ad8d45c80ab92bcedbe7fdb7d ("kube-controller-manager-old-k8s-version-985498_kube-system(f8d3d61ad8d45c80ab92bcedbe7fdb7d)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CreateContainerError: "failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2375308116 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/44: file exists"
	W0316 18:17:26.727957  838136 logs.go:138] Found kubelet problem: Mar 16 18:12:02 old-k8s-version-985498 kubelet[888]: E0316 18:12:02.347425     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.729296  838136 logs.go:138] Found kubelet problem: Mar 16 18:12:04 old-k8s-version-985498 kubelet[888]: E0316 18:12:04.759315     888 pod_workers.go:191] Error syncing pod 9d1a1153-d964-4893-aae0-6b926755edf4 ("busybox_default(9d1a1153-d964-4893-aae0-6b926755edf4)"), skipping: failed to "StartContainer" for "busybox" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\": failed to prepare extraction snapshot \"extract-753167480-EI9m sha256:e49dd1e534d9df22f1c5041581eaeb3f23fc6ef51ac5a4963ab35adc8f056f65\": failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-2174206111 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/45: file exists"
	W0316 18:17:26.729513  838136 logs.go:138] Found kubelet problem: Mar 16 18:12:05 old-k8s-version-985498 kubelet[888]: E0316 18:12:05.583630     888 pod_workers.go:191] Error syncing pod 9d1a1153-d964-4893-aae0-6b926755edf4 ("busybox_default(9d1a1153-d964-4893-aae0-6b926755edf4)"), skipping: failed to "StartContainer" for "busybox" with ImagePullBackOff: "Back-off pulling image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	W0316 18:17:26.731335  838136 logs.go:138] Found kubelet problem: Mar 16 18:12:17 old-k8s-version-985498 kubelet[888]: E0316 18:12:17.365731     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0316 18:17:26.732305  838136 logs.go:138] Found kubelet problem: Mar 16 18:12:31 old-k8s-version-985498 kubelet[888]: E0316 18:12:31.362316     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.732729  838136 logs.go:138] Found kubelet problem: Mar 16 18:12:38 old-k8s-version-985498 kubelet[888]: E0316 18:12:38.782628     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.732969  838136 logs.go:138] Found kubelet problem: Mar 16 18:12:39 old-k8s-version-985498 kubelet[888]: E0316 18:12:39.791862     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.733111  838136 logs.go:138] Found kubelet problem: Mar 16 18:12:43 old-k8s-version-985498 kubelet[888]: E0316 18:12:43.348091     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.733346  838136 logs.go:138] Found kubelet problem: Mar 16 18:12:46 old-k8s-version-985498 kubelet[888]: E0316 18:12:46.689033     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.735058  838136 logs.go:138] Found kubelet problem: Mar 16 18:12:58 old-k8s-version-985498 kubelet[888]: E0316 18:12:58.404260     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0316 18:17:26.735490  838136 logs.go:138] Found kubelet problem: Mar 16 18:13:02 old-k8s-version-985498 kubelet[888]: E0316 18:13:02.883259     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.735729  838136 logs.go:138] Found kubelet problem: Mar 16 18:13:06 old-k8s-version-985498 kubelet[888]: E0316 18:13:06.689066     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.735866  838136 logs.go:138] Found kubelet problem: Mar 16 18:13:11 old-k8s-version-985498 kubelet[888]: E0316 18:13:11.347423     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.736102  838136 logs.go:138] Found kubelet problem: Mar 16 18:13:20 old-k8s-version-985498 kubelet[888]: E0316 18:13:20.346818     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.736237  838136 logs.go:138] Found kubelet problem: Mar 16 18:13:22 old-k8s-version-985498 kubelet[888]: E0316 18:13:22.349160     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.736374  838136 logs.go:138] Found kubelet problem: Mar 16 18:13:34 old-k8s-version-985498 kubelet[888]: E0316 18:13:34.347075     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.736801  838136 logs.go:138] Found kubelet problem: Mar 16 18:13:36 old-k8s-version-985498 kubelet[888]: E0316 18:13:36.006325     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.737037  838136 logs.go:138] Found kubelet problem: Mar 16 18:13:37 old-k8s-version-985498 kubelet[888]: E0316 18:13:37.013902     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.737173  838136 logs.go:138] Found kubelet problem: Mar 16 18:13:46 old-k8s-version-985498 kubelet[888]: E0316 18:13:46.347475     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.737421  838136 logs.go:138] Found kubelet problem: Mar 16 18:13:51 old-k8s-version-985498 kubelet[888]: E0316 18:13:51.347194     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.737556  838136 logs.go:138] Found kubelet problem: Mar 16 18:13:58 old-k8s-version-985498 kubelet[888]: E0316 18:13:58.348592     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.737794  838136 logs.go:138] Found kubelet problem: Mar 16 18:14:03 old-k8s-version-985498 kubelet[888]: E0316 18:14:03.346460     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.737933  838136 logs.go:138] Found kubelet problem: Mar 16 18:14:09 old-k8s-version-985498 kubelet[888]: E0316 18:14:09.347794     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.738169  838136 logs.go:138] Found kubelet problem: Mar 16 18:14:15 old-k8s-version-985498 kubelet[888]: E0316 18:14:15.348212     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.739915  838136 logs.go:138] Found kubelet problem: Mar 16 18:14:21 old-k8s-version-985498 kubelet[888]: E0316 18:14:21.360852     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0316 18:17:26.740357  838136 logs.go:138] Found kubelet problem: Mar 16 18:14:29 old-k8s-version-985498 kubelet[888]: E0316 18:14:29.175538     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.740493  838136 logs.go:138] Found kubelet problem: Mar 16 18:14:32 old-k8s-version-985498 kubelet[888]: E0316 18:14:32.348500     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.740728  838136 logs.go:138] Found kubelet problem: Mar 16 18:14:36 old-k8s-version-985498 kubelet[888]: E0316 18:14:36.689558     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.740867  838136 logs.go:138] Found kubelet problem: Mar 16 18:14:46 old-k8s-version-985498 kubelet[888]: E0316 18:14:46.348058     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.741102  838136 logs.go:138] Found kubelet problem: Mar 16 18:14:49 old-k8s-version-985498 kubelet[888]: E0316 18:14:49.347315     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.741235  838136 logs.go:138] Found kubelet problem: Mar 16 18:14:57 old-k8s-version-985498 kubelet[888]: E0316 18:14:57.349480     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.741471  838136 logs.go:138] Found kubelet problem: Mar 16 18:15:03 old-k8s-version-985498 kubelet[888]: E0316 18:15:03.346815     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.741606  838136 logs.go:138] Found kubelet problem: Mar 16 18:15:10 old-k8s-version-985498 kubelet[888]: E0316 18:15:10.347187     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.741845  838136 logs.go:138] Found kubelet problem: Mar 16 18:15:18 old-k8s-version-985498 kubelet[888]: E0316 18:15:18.346934     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.741980  838136 logs.go:138] Found kubelet problem: Mar 16 18:15:25 old-k8s-version-985498 kubelet[888]: E0316 18:15:25.347491     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.742249  838136 logs.go:138] Found kubelet problem: Mar 16 18:15:29 old-k8s-version-985498 kubelet[888]: E0316 18:15:29.347101     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.742385  838136 logs.go:138] Found kubelet problem: Mar 16 18:15:39 old-k8s-version-985498 kubelet[888]: E0316 18:15:39.347176     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.742620  838136 logs.go:138] Found kubelet problem: Mar 16 18:15:42 old-k8s-version-985498 kubelet[888]: E0316 18:15:42.347133     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.742754  838136 logs.go:138] Found kubelet problem: Mar 16 18:15:50 old-k8s-version-985498 kubelet[888]: E0316 18:15:50.348255     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.743180  838136 logs.go:138] Found kubelet problem: Mar 16 18:15:58 old-k8s-version-985498 kubelet[888]: E0316 18:15:58.519929     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.743316  838136 logs.go:138] Found kubelet problem: Mar 16 18:16:03 old-k8s-version-985498 kubelet[888]: E0316 18:16:03.347044     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.743562  838136 logs.go:138] Found kubelet problem: Mar 16 18:16:06 old-k8s-version-985498 kubelet[888]: E0316 18:16:06.689281     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.743697  838136 logs.go:138] Found kubelet problem: Mar 16 18:16:17 old-k8s-version-985498 kubelet[888]: E0316 18:16:17.347194     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.743937  838136 logs.go:138] Found kubelet problem: Mar 16 18:16:19 old-k8s-version-985498 kubelet[888]: E0316 18:16:19.346699     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.744072  838136 logs.go:138] Found kubelet problem: Mar 16 18:16:30 old-k8s-version-985498 kubelet[888]: E0316 18:16:30.348163     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.744308  838136 logs.go:138] Found kubelet problem: Mar 16 18:16:34 old-k8s-version-985498 kubelet[888]: E0316 18:16:34.346242     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.744441  838136 logs.go:138] Found kubelet problem: Mar 16 18:16:41 old-k8s-version-985498 kubelet[888]: E0316 18:16:41.347306     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.744677  838136 logs.go:138] Found kubelet problem: Mar 16 18:16:49 old-k8s-version-985498 kubelet[888]: E0316 18:16:49.347088     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.744816  838136 logs.go:138] Found kubelet problem: Mar 16 18:16:56 old-k8s-version-985498 kubelet[888]: E0316 18:16:56.347531     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:26.745050  838136 logs.go:138] Found kubelet problem: Mar 16 18:17:01 old-k8s-version-985498 kubelet[888]: E0316 18:17:01.346320     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.746768  838136 logs.go:138] Found kubelet problem: Mar 16 18:17:08 old-k8s-version-985498 kubelet[888]: E0316 18:17:08.362954     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0316 18:17:26.747010  838136 logs.go:138] Found kubelet problem: Mar 16 18:17:16 old-k8s-version-985498 kubelet[888]: E0316 18:17:16.346879     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:26.747145  838136 logs.go:138] Found kubelet problem: Mar 16 18:17:23 old-k8s-version-985498 kubelet[888]: E0316 18:17:23.347609     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0316 18:17:26.747156  838136 logs.go:123] Gathering logs for dmesg ...
	I0316 18:17:26.747172  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 18:17:26.766207  838136 logs.go:123] Gathering logs for kube-scheduler [34b075a6e3dfea5f9806aeb9625651a26b0db86e59f277f6376fd8767fb23b0c] ...
	I0316 18:17:26.766251  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34b075a6e3dfea5f9806aeb9625651a26b0db86e59f277f6376fd8767fb23b0c"
	I0316 18:17:26.823871  838136 logs.go:123] Gathering logs for kube-proxy [d73b58bba35328eea373a801852be747130c9844121cf55bd77643b3531047cd] ...
	I0316 18:17:26.823920  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d73b58bba35328eea373a801852be747130c9844121cf55bd77643b3531047cd"
	I0316 18:17:26.870843  838136 logs.go:123] Gathering logs for kube-controller-manager [05061990c3ccf6f330cf21ba541a8be55fca74639e81e4b0d14b30bee51fc554] ...
	I0316 18:17:26.870883  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05061990c3ccf6f330cf21ba541a8be55fca74639e81e4b0d14b30bee51fc554"
	I0316 18:17:26.940409  838136 logs.go:123] Gathering logs for kubernetes-dashboard [aba262227c6f69883d13fafc927cfe64d82292e8029ae85f3213b3f2148c23e3] ...
	I0316 18:17:26.940460  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aba262227c6f69883d13fafc927cfe64d82292e8029ae85f3213b3f2148c23e3"
	I0316 18:17:26.987147  838136 logs.go:123] Gathering logs for kube-apiserver [84cebb4cfc43d687983d6d41133a762dda43b9399298c00c44f46847e2f61438] ...
	I0316 18:17:26.987189  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84cebb4cfc43d687983d6d41133a762dda43b9399298c00c44f46847e2f61438"
	I0316 18:17:27.062021  838136 logs.go:123] Gathering logs for containerd ...
	I0316 18:17:27.062071  838136 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0316 18:17:27.136063  838136 out.go:304] Setting ErrFile to fd 2...
	I0316 18:17:27.136101  838136 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0316 18:17:27.136179  838136 out.go:239] X Problems detected in kubelet:
	W0316 18:17:27.136198  838136 out.go:239]   Mar 16 18:16:56 old-k8s-version-985498 kubelet[888]: E0316 18:16:56.347531     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0316 18:17:27.136211  838136 out.go:239]   Mar 16 18:17:01 old-k8s-version-985498 kubelet[888]: E0316 18:17:01.346320     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:27.136229  838136 out.go:239]   Mar 16 18:17:08 old-k8s-version-985498 kubelet[888]: E0316 18:17:08.362954     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	W0316 18:17:27.136246  838136 out.go:239]   Mar 16 18:17:16 old-k8s-version-985498 kubelet[888]: E0316 18:17:16.346879     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	W0316 18:17:27.136263  838136 out.go:239]   Mar 16 18:17:23 old-k8s-version-985498 kubelet[888]: E0316 18:17:23.347609     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0316 18:17:27.136276  838136 out.go:304] Setting ErrFile to fd 2...
	I0316 18:17:27.136283  838136 out.go:338] TERM=,COLORTERM=, which probably does not support color
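Every kubelet problem flagged above reduces to one root cause: the metrics-server pod is pointed at fake.domain, a registry host that deliberately does not resolve, so each pull attempt dies at the DNS lookup step. A minimal Go sketch of that failure mode (the hostname comes from the log; the program itself is illustrative, not minikube code):

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// containerd fails at exactly this step in the log:
    	// "dial tcp: lookup fake.domain: no such host"
    	if _, err := net.LookupHost("fake.domain"); err != nil {
    		fmt.Println("lookup failed:", err)
    		return
    	}
    	fmt.Println("unexpectedly resolved")
    }

Because the name can never resolve, the kubelet cycles between ErrImagePull and ever-longer ImagePullBackOff intervals, which is the pattern the repeated warnings above record.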
	I0316 18:17:28.494615  841431 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 18:17:28.494636  841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0316 18:17:28.494664  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
	I0316 18:17:28.498412  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:28.498867  841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
	I0316 18:17:28.498902  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:28.499137  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHPort
	I0316 18:17:28.499360  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
	I0316 18:17:28.499603  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHUsername
	I0316 18:17:28.499803  841431 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/newest-cni-993416/id_rsa Username:docker}
	I0316 18:17:28.507069  841431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42097
	I0316 18:17:28.507685  841431 main.go:141] libmachine: () Calling .GetVersion
	I0316 18:17:28.508358  841431 main.go:141] libmachine: Using API Version  1
	I0316 18:17:28.508388  841431 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 18:17:28.508855  841431 main.go:141] libmachine: () Calling .GetMachineName
	I0316 18:17:28.509080  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetState
	I0316 18:17:28.510986  841431 main.go:141] libmachine: (newest-cni-993416) Calling .DriverName
	I0316 18:17:28.511289  841431 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0316 18:17:28.511318  841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0316 18:17:28.511342  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHHostname
	I0316 18:17:28.515154  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:28.515818  841431 main.go:141] libmachine: (newest-cni-993416) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:0d:0a", ip: ""} in network mk-newest-cni-993416: {Iface:virbr4 ExpiryTime:2024-03-16 19:17:06 +0000 UTC Type:0 Mac:52:54:00:73:0d:0a Iaid: IPaddr:192.168.72.228 Prefix:24 Hostname:newest-cni-993416 Clientid:01:52:54:00:73:0d:0a}
	I0316 18:17:28.515843  841431 main.go:141] libmachine: (newest-cni-993416) DBG | domain newest-cni-993416 has defined IP address 192.168.72.228 and MAC address 52:54:00:73:0d:0a in network mk-newest-cni-993416
	I0316 18:17:28.516129  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHPort
	I0316 18:17:28.516364  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHKeyPath
	I0316 18:17:28.516500  841431 main.go:141] libmachine: (newest-cni-993416) Calling .GetSSHUsername
	I0316 18:17:28.516679  841431 sshutil.go:53] new ssh client: &{IP:192.168.72.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/newest-cni-993416/id_rsa Username:docker}
	I0316 18:17:28.691344  841431 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 18:17:28.716545  841431 api_server.go:52] waiting for apiserver process to appear ...
	I0316 18:17:28.716654  841431 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:17:28.740425  841431 api_server.go:72] duration metric: took 314.168152ms to wait for apiserver process to appear ...
	I0316 18:17:28.740454  841431 api_server.go:88] waiting for apiserver healthz status ...
	I0316 18:17:28.740473  841431 api_server.go:253] Checking apiserver healthz at https://192.168.72.228:8443/healthz ...
	I0316 18:17:28.753421  841431 api_server.go:279] https://192.168.72.228:8443/healthz returned 200:
	ok
	I0316 18:17:28.755222  841431 api_server.go:141] control plane version: v1.29.0-rc.2
	I0316 18:17:28.755253  841431 api_server.go:131] duration metric: took 14.791646ms to wait for apiserver health ...
	I0316 18:17:28.755263  841431 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 18:17:28.766459  841431 system_pods.go:59] 9 kube-system pods found
	I0316 18:17:28.766499  841431 system_pods.go:61] "coredns-76f75df574-hkkkh" [efd50172-4179-4235-adcf-2cc14383680d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 18:17:28.766512  841431 system_pods.go:61] "coredns-76f75df574-rhrkz" [3f5fe20f-4f2b-4dad-ab54-c00261ce77fb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 18:17:28.766520  841431 system_pods.go:61] "etcd-newest-cni-993416" [f9d9e16d-4c48-41ef-954d-84b2adc1d678] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0316 18:17:28.766526  841431 system_pods.go:61] "kube-apiserver-newest-cni-993416" [b745c8a8-8c3a-48a8-8884-8952190b871e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0316 18:17:28.766532  841431 system_pods.go:61] "kube-controller-manager-newest-cni-993416" [d0879001-bfc2-4268-a421-9257bc6155cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0316 18:17:28.766540  841431 system_pods.go:61] "kube-proxy-lbfnv" [4269401d-14f7-4752-a7df-ec3f9da042d0] Running
	I0316 18:17:28.766584  841431 system_pods.go:61] "kube-scheduler-newest-cni-993416" [53741680-de3a-449b-ab2b-a520bc8c2c54] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0316 18:17:28.766593  841431 system_pods.go:61] "metrics-server-57f55c9bc5-rbrmj" [3eabea78-4346-49ea-ada5-72c98a6daa7d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 18:17:28.766598  841431 system_pods.go:61] "storage-provisioner" [0d551c52-212b-4b92-9b76-e1034e2d8d0b] Running
	I0316 18:17:28.766604  841431 system_pods.go:74] duration metric: took 11.334758ms to wait for pod list to return data ...
	I0316 18:17:28.766612  841431 default_sa.go:34] waiting for default service account to be created ...
	I0316 18:17:28.772813  841431 default_sa.go:45] found service account: "default"
	I0316 18:17:28.772841  841431 default_sa.go:55] duration metric: took 6.223203ms for default service account to be created ...
	I0316 18:17:28.772853  841431 kubeadm.go:576] duration metric: took 346.603392ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0316 18:17:28.772869  841431 node_conditions.go:102] verifying NodePressure condition ...
	I0316 18:17:28.782511  841431 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 18:17:28.782538  841431 node_conditions.go:123] node cpu capacity is 2
	I0316 18:17:28.782550  841431 node_conditions.go:105] duration metric: took 9.676004ms to run NodePressure ...
	I0316 18:17:28.782562  841431 start.go:240] waiting for startup goroutines ...
	I0316 18:17:28.813219  841431 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0316 18:17:28.813256  841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0316 18:17:28.858302  841431 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 18:17:28.861227  841431 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0316 18:17:28.886196  841431 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0316 18:17:28.886233  841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0316 18:17:28.983213  841431 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0316 18:17:28.983243  841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0316 18:17:28.989906  841431 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0316 18:17:28.989932  841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0316 18:17:29.121908  841431 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0316 18:17:29.121935  841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0316 18:17:29.124194  841431 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0316 18:17:29.124236  841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0316 18:17:29.210699  841431 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 18:17:29.210731  841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0316 18:17:29.258617  841431 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0316 18:17:29.258661  841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0316 18:17:29.360734  841431 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 18:17:29.383687  841431 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0316 18:17:29.383712  841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0316 18:17:29.461299  841431 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0316 18:17:29.461340  841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0316 18:17:29.515787  841431 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0316 18:17:29.515831  841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0316 18:17:29.593488  841431 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0316 18:17:29.593525  841431 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0316 18:17:29.669463  841431 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0316 18:17:30.700709  841431 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.842355565s)
	I0316 18:17:30.700792  841431 main.go:141] libmachine: Making call to close driver server
	I0316 18:17:30.700808  841431 main.go:141] libmachine: (newest-cni-993416) Calling .Close
	I0316 18:17:30.700883  841431 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.340115776s)
	I0316 18:17:30.700967  841431 main.go:141] libmachine: Making call to close driver server
	I0316 18:17:30.700994  841431 main.go:141] libmachine: (newest-cni-993416) Calling .Close
	I0316 18:17:30.701312  841431 main.go:141] libmachine: Successfully made call to close driver server
	I0316 18:17:30.701331  841431 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 18:17:30.701351  841431 main.go:141] libmachine: Making call to close driver server
	I0316 18:17:30.701363  841431 main.go:141] libmachine: (newest-cni-993416) Calling .Close
	I0316 18:17:30.701516  841431 main.go:141] libmachine: (newest-cni-993416) DBG | Closing plugin on server side
	I0316 18:17:30.701561  841431 main.go:141] libmachine: Successfully made call to close driver server
	I0316 18:17:30.701594  841431 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 18:17:30.701607  841431 main.go:141] libmachine: Making call to close driver server
	I0316 18:17:30.700801  841431 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.839529172s)
	I0316 18:17:30.701662  841431 main.go:141] libmachine: Making call to close driver server
	I0316 18:17:30.701676  841431 main.go:141] libmachine: (newest-cni-993416) Calling .Close
	I0316 18:17:30.701622  841431 main.go:141] libmachine: (newest-cni-993416) Calling .Close
	I0316 18:17:30.701822  841431 main.go:141] libmachine: Successfully made call to close driver server
	I0316 18:17:30.701844  841431 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 18:17:30.702168  841431 main.go:141] libmachine: Successfully made call to close driver server
	I0316 18:17:30.702181  841431 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 18:17:30.702190  841431 main.go:141] libmachine: Making call to close driver server
	I0316 18:17:30.702197  841431 main.go:141] libmachine: (newest-cni-993416) Calling .Close
	I0316 18:17:30.702313  841431 main.go:141] libmachine: (newest-cni-993416) DBG | Closing plugin on server side
	I0316 18:17:30.702386  841431 main.go:141] libmachine: Successfully made call to close driver server
	I0316 18:17:30.702691  841431 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 18:17:30.702704  841431 addons.go:470] Verifying addon metrics-server=true in "newest-cni-993416"
	I0316 18:17:30.702483  841431 main.go:141] libmachine: Successfully made call to close driver server
	I0316 18:17:30.702787  841431 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 18:17:30.702587  841431 main.go:141] libmachine: (newest-cni-993416) DBG | Closing plugin on server side
	I0316 18:17:30.711143  841431 main.go:141] libmachine: Making call to close driver server
	I0316 18:17:30.711187  841431 main.go:141] libmachine: (newest-cni-993416) Calling .Close
	I0316 18:17:30.711543  841431 main.go:141] libmachine: (newest-cni-993416) DBG | Closing plugin on server side
	I0316 18:17:30.711601  841431 main.go:141] libmachine: Successfully made call to close driver server
	I0316 18:17:30.711626  841431 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 18:17:31.260529  841431 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.590970849s)
	I0316 18:17:31.260604  841431 main.go:141] libmachine: Making call to close driver server
	I0316 18:17:31.260620  841431 main.go:141] libmachine: (newest-cni-993416) Calling .Close
	I0316 18:17:31.261040  841431 main.go:141] libmachine: (newest-cni-993416) DBG | Closing plugin on server side
	I0316 18:17:31.261069  841431 main.go:141] libmachine: Successfully made call to close driver server
	I0316 18:17:31.261120  841431 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 18:17:31.261129  841431 main.go:141] libmachine: Making call to close driver server
	I0316 18:17:31.261137  841431 main.go:141] libmachine: (newest-cni-993416) Calling .Close
	I0316 18:17:31.261437  841431 main.go:141] libmachine: Successfully made call to close driver server
	I0316 18:17:31.261459  841431 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 18:17:31.263509  841431 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-993416 addons enable metrics-server
	
	I0316 18:17:31.265109  841431 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0316 18:17:31.266627  841431 addons.go:505] duration metric: took 2.840342384s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0316 18:17:31.266687  841431 start.go:245] waiting for cluster config update ...
	I0316 18:17:31.266702  841431 start.go:254] writing updated cluster config ...
	I0316 18:17:31.266974  841431 ssh_runner.go:195] Run: rm -f paused
	I0316 18:17:31.321868  841431 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0316 18:17:31.323761  841431 out.go:177] * Done! kubectl is now configured to use "newest-cni-993416" cluster and "default" namespace by default
	I0316 18:17:37.137763  838136 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 18:17:37.159011  838136 api_server.go:72] duration metric: took 6m0.980190849s to wait for apiserver process to appear ...
	I0316 18:17:37.159048  838136 api_server.go:88] waiting for apiserver healthz status ...
	I0316 18:17:37.161262  838136 out.go:177] 
	W0316 18:17:37.162843  838136 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0316 18:17:37.162874  838136 out.go:239] * 
	W0316 18:17:37.163764  838136 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0316 18:17:37.165696  838136 out.go:177] 
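This is the actual test failure: after restarting the v1.20.0 cluster, minikube spent its full 6m budget polling the apiserver's /healthz endpoint without ever seeing it healthy (the interleaved 841431 process above shows what a successful check looks like: a 200 response with body "ok"). A rough sketch of such a wait loop, assuming a plain HTTPS GET with a deadline; the URL, port, and helper name are illustrative, not minikube's internals:

    package main

    import (
    	"crypto/tls"
    	"errors"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Self-signed apiserver cert; verification skipped purely for illustration.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for deadline := time.Now().Add(timeout); time.Now().Before(deadline); time.Sleep(2 * time.Second) {
    		resp, err := client.Get(url)
    		if err != nil {
    			continue // apiserver not answering yet
    		}
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			return nil // healthy
    		}
    	}
    	return errors.New("apiserver healthz never reported healthy")
    }

    func main() {
    	// 192.168.61.233 is the node's InternalIP from the describe nodes output
    	// below; the 8443 port is assumed from the sibling cluster's check.
    	fmt.Println(waitForHealthz("https://192.168.61.233:8443/healthz", 6*time.Minute))
    }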
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	4575a17a262fe       523cad1a4df73       About a minute ago   Exited              dashboard-metrics-scraper   5                   fa71cda057018       dashboard-metrics-scraper-8d5bb5db8-sztdk
	aba262227c6f6       07655ddf2eebe       4 minutes ago        Running             kubernetes-dashboard        0                   33855e9a8d54b       kubernetes-dashboard-cd95d586-656nk
	89c765def3f3a       56cc512116c8f       5 minutes ago        Running             busybox                     0                   7057fc81b7e07       busybox
	05061990c3ccf       b9fa1895dcaa6       5 minutes ago        Running             kube-controller-manager     1                   5beca916d73cc       kube-controller-manager-old-k8s-version-985498
	aa120a5aa0d88       6e38f40d628db       5 minutes ago        Running             storage-provisioner         1                   0879e17dc3891       storage-provisioner
	d73b58bba3532       10cc881966cfd       5 minutes ago        Running             kube-proxy                  0                   57eefc4089687       kube-proxy-nvd4k
	61efb30968d2b       bfe3a36ebd252       5 minutes ago        Running             coredns                     0                   bfd9c69418b66       coredns-74ff55c5b-p8874
	7ed441150c733       6e38f40d628db       6 minutes ago        Exited              storage-provisioner         0                   0879e17dc3891       storage-provisioner
	84cebb4cfc43d       ca9843d3b5454       6 minutes ago        Running             kube-apiserver              0                   a118956a32a95       kube-apiserver-old-k8s-version-985498
	2434210f6c63b       0369cf4303ffd       6 minutes ago        Running             etcd                        0                   5c82c8921bb2b       etcd-old-k8s-version-985498
	162132fbe06fe       b9fa1895dcaa6       6 minutes ago        Exited              kube-controller-manager     0                   5beca916d73cc       kube-controller-manager-old-k8s-version-985498
	34b075a6e3dfe       3138b6e3d4712       6 minutes ago        Running             kube-scheduler              0                   30ac6cb133c85       kube-scheduler-old-k8s-version-985498
	
	
	==> containerd <==
	Mar 16 18:14:21 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:14:21.357676112Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Mar 16 18:14:21 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:14:21.359968980Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Mar 16 18:14:21 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:14:21.360097886Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Mar 16 18:14:28 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:14:28.350109352Z" level=info msg="CreateContainer within sandbox \"fa71cda057018df49c80e723cf5af396685445ce28a9407dff4fef15f719ecb4\" for container name:\"dashboard-metrics-scraper\" attempt:4"
	Mar 16 18:14:28 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:14:28.381226527Z" level=info msg="CreateContainer within sandbox \"fa71cda057018df49c80e723cf5af396685445ce28a9407dff4fef15f719ecb4\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"00d3a2ef8f92ccbd0f8ca460fee66cd544f14eedf61b1862c11a627c49c5b8bc\""
	Mar 16 18:14:28 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:14:28.382660562Z" level=info msg="StartContainer for \"00d3a2ef8f92ccbd0f8ca460fee66cd544f14eedf61b1862c11a627c49c5b8bc\""
	Mar 16 18:14:28 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:14:28.508182987Z" level=info msg="StartContainer for \"00d3a2ef8f92ccbd0f8ca460fee66cd544f14eedf61b1862c11a627c49c5b8bc\" returns successfully"
	Mar 16 18:14:28 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:14:28.569953767Z" level=info msg="shim disconnected" id=00d3a2ef8f92ccbd0f8ca460fee66cd544f14eedf61b1862c11a627c49c5b8bc namespace=k8s.io
	Mar 16 18:14:28 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:14:28.570131684Z" level=warning msg="cleaning up after shim disconnected" id=00d3a2ef8f92ccbd0f8ca460fee66cd544f14eedf61b1862c11a627c49c5b8bc namespace=k8s.io
	Mar 16 18:14:28 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:14:28.570263917Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Mar 16 18:14:29 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:14:29.182786591Z" level=info msg="RemoveContainer for \"962b79b13ecab4697776bb614c5c4f1d9268a209277dfcfa5e541e5bf59f9c0f\""
	Mar 16 18:14:29 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:14:29.192113313Z" level=info msg="RemoveContainer for \"962b79b13ecab4697776bb614c5c4f1d9268a209277dfcfa5e541e5bf59f9c0f\" returns successfully"
	Mar 16 18:15:57 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:15:57.351505859Z" level=info msg="CreateContainer within sandbox \"fa71cda057018df49c80e723cf5af396685445ce28a9407dff4fef15f719ecb4\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Mar 16 18:15:57 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:15:57.385557581Z" level=info msg="CreateContainer within sandbox \"fa71cda057018df49c80e723cf5af396685445ce28a9407dff4fef15f719ecb4\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"4575a17a262fe6a5525e62f8abf26d7ac29469e6cea1c9bf055e9109b155c439\""
	Mar 16 18:15:57 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:15:57.387778244Z" level=info msg="StartContainer for \"4575a17a262fe6a5525e62f8abf26d7ac29469e6cea1c9bf055e9109b155c439\""
	Mar 16 18:15:57 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:15:57.555444587Z" level=info msg="StartContainer for \"4575a17a262fe6a5525e62f8abf26d7ac29469e6cea1c9bf055e9109b155c439\" returns successfully"
	Mar 16 18:15:57 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:15:57.641017202Z" level=info msg="shim disconnected" id=4575a17a262fe6a5525e62f8abf26d7ac29469e6cea1c9bf055e9109b155c439 namespace=k8s.io
	Mar 16 18:15:57 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:15:57.641124050Z" level=warning msg="cleaning up after shim disconnected" id=4575a17a262fe6a5525e62f8abf26d7ac29469e6cea1c9bf055e9109b155c439 namespace=k8s.io
	Mar 16 18:15:57 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:15:57.641143875Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Mar 16 18:15:58 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:15:58.529322943Z" level=info msg="RemoveContainer for \"00d3a2ef8f92ccbd0f8ca460fee66cd544f14eedf61b1862c11a627c49c5b8bc\""
	Mar 16 18:15:58 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:15:58.535806748Z" level=info msg="RemoveContainer for \"00d3a2ef8f92ccbd0f8ca460fee66cd544f14eedf61b1862c11a627c49c5b8bc\" returns successfully"
	Mar 16 18:17:08 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:17:08.348926444Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 16 18:17:08 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:17:08.358550650Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" host=fake.domain
	Mar 16 18:17:08 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:17:08.361090149Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Mar 16 18:17:08 old-k8s-version-985498 containerd[624]: time="2024-03-16T18:17:08.361281019Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
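The back-off values in the containerd and kubelet lines above (1m20s, then 2m40s) are not arbitrary: the kubelet's crash-loop restart delay starts at 10s and doubles per failed restart up to a 5m cap, making 1m20s and 2m40s the fourth and fifth steps. A tiny sketch of that doubling schedule; the constants reflect the kubelet defaults as I understand them for this version, so treat them as an assumption:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	backoff := 10 * time.Second // kubelet's initial crash-loop delay (assumed default)
    	const maxBackoff = 5 * time.Minute
    	for i := 1; i <= 7; i++ {
    		fmt.Printf("failed restart %d -> back-off %v\n", i, backoff)
    		backoff *= 2
    		if backoff > maxBackoff {
    			backoff = maxBackoff
    		}
    	}
    }

This prints 10s, 20s, 40s, 1m20s, 2m40s and then a constant 5m0s, matching the progression visible in the dashboard-metrics-scraper warnings.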
	
	
	==> coredns [61efb30968d2bf3bd0aff15b70ec1a33c3654d61c5164cc2879e18ef21cd1b77] <==
	I0316 18:12:18.847750       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-16 18:11:48.845664254 +0000 UTC m=+0.081462614) (total time: 30.001433121s):
	Trace[2019727887]: [30.001433121s] [30.001433121s] END
	E0316 18:12:18.847898       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0316 18:12:18.847999       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-16 18:11:48.846290043 +0000 UTC m=+0.082088394) (total time: 30.001041211s):
	Trace[1427131847]: [30.001041211s] [30.001041211s] END
	E0316 18:12:18.848068       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0316 18:12:18.848103       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-03-16 18:11:48.845209621 +0000 UTC m=+0.081007992) (total time: 30.002393876s):
	Trace[939984059]: [30.002393876s] [30.002393876s] END
	E0316 18:12:18.848200       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 0c3216a78d32f257fd8c644ead867e29
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:35912 - 52941 "HINFO IN 7891349533246800731.8101106274944321035. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02827184s
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
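The CoreDNS traces above show 30s list timeouts against 10.96.0.1:443 (the in-cluster "kubernetes" Service VIP for the apiserver) before the ready plugin eventually stopped waiting. A quick reachability probe for that VIP is a plain TCP dial with a timeout; a sketch, assuming it runs from inside the cluster network (address taken from the log, timeout illustrative):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
    	if err != nil {
    		fmt.Println("dial failed:", err) // e.g. "i/o timeout", as in the CoreDNS reflector errors
    		return
    	}
    	conn.Close()
    	fmt.Println("service VIP reachable")
    }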
	
	
	==> describe nodes <==
	Name:               old-k8s-version-985498
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-985498
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcb7bcec19ba52ac09364e1139fb2071215a1bc6
	                    minikube.k8s.io/name=old-k8s-version-985498
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_16T18_07_16_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 16 Mar 2024 18:07:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-985498
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 16 Mar 2024 18:17:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 16 Mar 2024 18:13:03 +0000   Sat, 16 Mar 2024 18:07:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 16 Mar 2024 18:13:03 +0000   Sat, 16 Mar 2024 18:07:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 16 Mar 2024 18:13:03 +0000   Sat, 16 Mar 2024 18:07:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 16 Mar 2024 18:13:03 +0000   Sat, 16 Mar 2024 18:11:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.233
	  Hostname:    old-k8s-version-985498
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4104b957f1564a25a1b06b701038e2d3
	  System UUID:                4104b957-f156-4a25-a1b0-6b701038e2d3
	  Boot ID:                    f0635e75-e914-462e-b0f6-4dfb2f2adbc1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.14
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 coredns-74ff55c5b-p8874                           100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-old-k8s-version-985498                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kube-apiserver-old-k8s-version-985498             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-old-k8s-version-985498    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-nvd4k                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-old-k8s-version-985498             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-9975d5f86-xqhk9                    100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         8m56s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-sztdk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-656nk               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  100Mi (0%)   0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 10m                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x4 over 10m)      kubelet     Node old-k8s-version-985498 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x3 over 10m)      kubelet     Node old-k8s-version-985498 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x4 over 10m)      kubelet     Node old-k8s-version-985498 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m                    kubelet     Node old-k8s-version-985498 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet     Node old-k8s-version-985498 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet     Node old-k8s-version-985498 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                10m                    kubelet     Node old-k8s-version-985498 status is now: NodeReady
	  Normal  Starting                 10m                    kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m39s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m39s (x9 over 6m39s)  kubelet     Node old-k8s-version-985498 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m39s (x7 over 6m39s)  kubelet     Node old-k8s-version-985498 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m39s (x7 over 6m39s)  kubelet     Node old-k8s-version-985498 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m39s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m49s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +4.774322] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.740197] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.787539] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.588929] systemd-fstab-generator[488]: Ignoring "noauto" option for root device
	[  +0.075287] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075824] systemd-fstab-generator[500]: Ignoring "noauto" option for root device
	[  +0.201911] systemd-fstab-generator[514]: Ignoring "noauto" option for root device
	[  +0.131486] systemd-fstab-generator[526]: Ignoring "noauto" option for root device
	[  +0.379126] systemd-fstab-generator[555]: Ignoring "noauto" option for root device
	[  +6.884463] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.070466] kauditd_printk_skb: 158 callbacks suppressed
	[  +2.694758] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
	[  +2.245908] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.061702] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.534369] kauditd_printk_skb: 18 callbacks suppressed
	[Mar16 18:11] kauditd_printk_skb: 26 callbacks suppressed
	[ +21.757416] kauditd_printk_skb: 6 callbacks suppressed
	[  +2.035387] systemd-fstab-generator[1480]: Ignoring "noauto" option for root device
	[ +12.185563] kauditd_printk_skb: 32 callbacks suppressed
	[Mar16 18:12] kauditd_printk_skb: 31 callbacks suppressed
	[ +12.054497] kauditd_printk_skb: 6 callbacks suppressed
	[ +20.479846] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.494496] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [2434210f6c63bec8d2ba7076471915eb02d3219675ee8ac3cab9722cca4f03e9] <==
	2024-03-16 18:13:33.611075 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 18:13:43.611098 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 18:13:53.611353 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 18:14:03.612320 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 18:14:13.611743 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 18:14:23.611562 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 18:14:33.610786 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 18:14:43.611174 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 18:14:53.611080 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 18:15:03.610994 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 18:15:13.611341 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 18:15:23.611142 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 18:15:33.611534 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 18:15:43.611071 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 18:15:53.611086 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 18:16:03.612267 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 18:16:13.611102 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 18:16:23.611592 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 18:16:33.611109 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 18:16:43.611113 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 18:16:53.610985 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 18:17:03.611530 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 18:17:13.610943 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 18:17:23.610883 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-03-16 18:17:33.611175 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 18:17:38 up 7 min,  0 users,  load average: 0.05, 0.27, 0.17
	Linux old-k8s-version-985498 5.10.207 #1 SMP Sat Mar 16 11:53:32 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [84cebb4cfc43d687983d6d41133a762dda43b9399298c00c44f46847e2f61438] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 18:14:34.724231       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0316 18:14:41.642110       1 client.go:360] parsed scheme: "passthrough"
	I0316 18:14:41.642485       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0316 18:14:41.642587       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0316 18:15:21.536978       1 client.go:360] parsed scheme: "passthrough"
	I0316 18:15:21.537082       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0316 18:15:21.537094       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0316 18:16:06.282030       1 client.go:360] parsed scheme: "passthrough"
	I0316 18:16:06.282110       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0316 18:16:06.282124       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0316 18:16:33.937530       1 handler_proxy.go:102] no RequestInfo found in the context
	E0316 18:16:33.937824       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 18:16:33.937858       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0316 18:16:46.794221       1 client.go:360] parsed scheme: "passthrough"
	I0316 18:16:46.794433       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0316 18:16:46.794968       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0316 18:17:27.584956       1 client.go:360] parsed scheme: "passthrough"
	I0316 18:17:27.585326       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0316 18:17:27.585516       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0316 18:17:33.938437       1 handler_proxy.go:102] no RequestInfo found in the context
	E0316 18:17:33.938563       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 18:17:33.938579       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [05061990c3ccf6f330cf21ba541a8be55fca74639e81e4b0d14b30bee51fc554] <==
	E0316 18:13:33.527464       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0316 18:13:37.901958       1 request.go:655] Throttling request took 1.048886779s, request: GET:https://192.168.61.233:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W0316 18:13:38.752975       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0316 18:14:04.030343       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0316 18:14:10.404169       1 request.go:655] Throttling request took 1.046896196s, request: GET:https://192.168.61.233:8443/apis/policy/v1beta1?timeout=32s
	W0316 18:14:11.256109       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0316 18:14:34.533459       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0316 18:14:42.907200       1 request.go:655] Throttling request took 1.047556849s, request: GET:https://192.168.61.233:8443/apis/extensions/v1beta1?timeout=32s
	W0316 18:14:43.761205       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0316 18:15:05.036216       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0316 18:15:15.412308       1 request.go:655] Throttling request took 1.047716137s, request: GET:https://192.168.61.233:8443/apis/extensions/v1beta1?timeout=32s
	W0316 18:15:16.264074       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0316 18:15:35.539966       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0316 18:15:47.914855       1 request.go:655] Throttling request took 1.048066792s, request: GET:https://192.168.61.233:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W0316 18:15:48.766840       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0316 18:16:06.042841       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0316 18:16:20.417848       1 request.go:655] Throttling request took 1.048473373s, request: GET:https://192.168.61.233:8443/apis/events.k8s.io/v1beta1?timeout=32s
	W0316 18:16:21.270089       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0316 18:16:36.545342       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0316 18:16:52.920918       1 request.go:655] Throttling request took 1.048141635s, request: GET:https://192.168.61.233:8443/apis/node.k8s.io/v1?timeout=32s
	W0316 18:16:53.772438       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0316 18:17:07.048102       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0316 18:17:25.423094       1 request.go:655] Throttling request took 1.04771719s, request: GET:https://192.168.61.233:8443/apis/extensions/v1beta1?timeout=32s
	W0316 18:17:26.275068       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0316 18:17:37.551507       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-controller-manager [162132fbe06feefe5047b9977675ebb65d90ca0056d9f9a9c6733dda273afd72] <==
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
	created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:171 +0x28b
	
	goroutine 145 [select]:
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc0010b6020, 0xc0010a20d0, 0xc00009cf60, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:539 +0x11d
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollUntil(0xdf8475800, 0xc0010a20d0, 0xc00009c0c0, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:492 +0xc5
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xdf8475800, 0xc0010a20d0, 0xc00009c0c0, 0x0, 0x4764ec8)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:511 +0xb3
	created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:174 +0x2f9
	
	goroutine 146 [select]:
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1(0xc00009c0c0, 0xc0010a20f0, 0x4e0fa60, 0xc0001261c0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:279 +0xbd
	created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:278 +0x8c
	
	goroutine 147 [select]:
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1.1(0xc00009d0e0, 0xdf8475800, 0x0, 0xc00009d020)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:588 +0x17b
	created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:571 +0x8c
	
	
	==> kube-proxy [d73b58bba35328eea373a801852be747130c9844121cf55bd77643b3531047cd] <==
	I0316 18:07:32.515621       1 node.go:172] Successfully retrieved node IP: 192.168.61.233
	I0316 18:07:32.515775       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.61.233), assume IPv4 operation
	W0316 18:07:32.609914       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0316 18:07:32.610139       1 server_others.go:185] Using iptables Proxier.
	I0316 18:07:32.612161       1 server.go:650] Version: v1.20.0
	I0316 18:07:32.621168       1 config.go:315] Starting service config controller
	I0316 18:07:32.621254       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0316 18:07:32.621463       1 config.go:224] Starting endpoint slice config controller
	I0316 18:07:32.621477       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0316 18:07:32.721776       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0316 18:07:32.722054       1 shared_informer.go:247] Caches are synced for service config 
	I0316 18:11:49.870050       1 node.go:172] Successfully retrieved node IP: 192.168.61.233
	I0316 18:11:49.870121       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.61.233), assume IPv4 operation
	W0316 18:11:49.900204       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0316 18:11:49.900354       1 server_others.go:185] Using iptables Proxier.
	I0316 18:11:49.902013       1 server.go:650] Version: v1.20.0
	I0316 18:11:49.905480       1 config.go:315] Starting service config controller
	I0316 18:11:49.905533       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0316 18:11:49.905564       1 config.go:224] Starting endpoint slice config controller
	I0316 18:11:49.905568       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0316 18:11:50.005949       1 shared_informer.go:247] Caches are synced for service config 
	I0316 18:11:50.006518       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [34b075a6e3dfea5f9806aeb9625651a26b0db86e59f277f6376fd8767fb23b0c] <==
	E0316 18:11:08.498529       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.61.233:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
	E0316 18:11:08.666954       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.61.233:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
	E0316 18:11:08.782890       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.61.233:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
	E0316 18:11:09.459992       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.61.233:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
	E0316 18:11:10.277532       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.61.233:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
	E0316 18:11:10.304574       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.61.233:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
	E0316 18:11:10.324342       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.61.233:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
	E0316 18:11:10.612240       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.61.233:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
	E0316 18:11:10.947874       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.61.233:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
	E0316 18:11:15.289892       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.61.233:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
	E0316 18:11:17.854143       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.61.233:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
	E0316 18:11:18.226347       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.61.233:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
	E0316 18:11:18.279020       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.61.233:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
	E0316 18:11:18.881823       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.61.233:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
	E0316 18:11:19.281888       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.61.233:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
	E0316 18:11:19.570252       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.61.233:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
	E0316 18:11:19.649468       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.61.233:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
	E0316 18:11:19.687041       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.61.233:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
	E0316 18:11:19.863870       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.61.233:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
	E0316 18:11:20.039168       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.61.233:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
	E0316 18:11:20.124690       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.61.233:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.61.233:8443: connect: connection refused
	E0316 18:11:32.868473       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0316 18:11:32.872744       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0316 18:11:32.872894       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0316 18:12:09.082486       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Mar 16 18:16:03 old-k8s-version-985498 kubelet[888]: E0316 18:16:03.347044     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 16 18:16:06 old-k8s-version-985498 kubelet[888]: I0316 18:16:06.688609     888 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4575a17a262fe6a5525e62f8abf26d7ac29469e6cea1c9bf055e9109b155c439
	Mar 16 18:16:06 old-k8s-version-985498 kubelet[888]: E0316 18:16:06.689281     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	Mar 16 18:16:17 old-k8s-version-985498 kubelet[888]: E0316 18:16:17.347194     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 16 18:16:19 old-k8s-version-985498 kubelet[888]: I0316 18:16:19.346181     888 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4575a17a262fe6a5525e62f8abf26d7ac29469e6cea1c9bf055e9109b155c439
	Mar 16 18:16:19 old-k8s-version-985498 kubelet[888]: E0316 18:16:19.346699     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	Mar 16 18:16:30 old-k8s-version-985498 kubelet[888]: E0316 18:16:30.348163     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 16 18:16:34 old-k8s-version-985498 kubelet[888]: I0316 18:16:34.345859     888 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4575a17a262fe6a5525e62f8abf26d7ac29469e6cea1c9bf055e9109b155c439
	Mar 16 18:16:34 old-k8s-version-985498 kubelet[888]: E0316 18:16:34.346242     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	Mar 16 18:16:41 old-k8s-version-985498 kubelet[888]: E0316 18:16:41.347306     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 16 18:16:49 old-k8s-version-985498 kubelet[888]: I0316 18:16:49.346632     888 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4575a17a262fe6a5525e62f8abf26d7ac29469e6cea1c9bf055e9109b155c439
	Mar 16 18:16:49 old-k8s-version-985498 kubelet[888]: E0316 18:16:49.347088     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	Mar 16 18:16:56 old-k8s-version-985498 kubelet[888]: E0316 18:16:56.347531     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 16 18:17:01 old-k8s-version-985498 kubelet[888]: I0316 18:17:01.345945     888 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4575a17a262fe6a5525e62f8abf26d7ac29469e6cea1c9bf055e9109b155c439
	Mar 16 18:17:01 old-k8s-version-985498 kubelet[888]: E0316 18:17:01.346320     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	Mar 16 18:17:08 old-k8s-version-985498 kubelet[888]: E0316 18:17:08.361682     888 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain: no such host
	Mar 16 18:17:08 old-k8s-version-985498 kubelet[888]: E0316 18:17:08.362146     888 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain: no such host
	Mar 16 18:17:08 old-k8s-version-985498 kubelet[888]: E0316 18:17:08.362691     888 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-9vszw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa
2-191f-4ae2-8aee-b1075a50b37b): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain: no such host
	Mar 16 18:17:08 old-k8s-version-985498 kubelet[888]: E0316 18:17:08.362954     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host"
	Mar 16 18:17:16 old-k8s-version-985498 kubelet[888]: I0316 18:17:16.346245     888 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4575a17a262fe6a5525e62f8abf26d7ac29469e6cea1c9bf055e9109b155c439
	Mar 16 18:17:16 old-k8s-version-985498 kubelet[888]: E0316 18:17:16.346879     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	Mar 16 18:17:23 old-k8s-version-985498 kubelet[888]: E0316 18:17:23.347609     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Mar 16 18:17:30 old-k8s-version-985498 kubelet[888]: I0316 18:17:30.346010     888 scope.go:95] [topologymanager] RemoveContainer - Container ID: 4575a17a262fe6a5525e62f8abf26d7ac29469e6cea1c9bf055e9109b155c439
	Mar 16 18:17:30 old-k8s-version-985498 kubelet[888]: E0316 18:17:30.346307     888 pod_workers.go:191] Error syncing pod c592d222-aa6b-4b6a-ad65-450b94be4b65 ("dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sztdk_kubernetes-dashboard(c592d222-aa6b-4b6a-ad65-450b94be4b65)"
	Mar 16 18:17:35 old-k8s-version-985498 kubelet[888]: E0316 18:17:35.347054     888 pod_workers.go:191] Error syncing pod ba5c6fa2-191f-4ae2-8aee-b1075a50b37b ("metrics-server-9975d5f86-xqhk9_kube-system(ba5c6fa2-191f-4ae2-8aee-b1075a50b37b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> kubernetes-dashboard [aba262227c6f69883d13fafc927cfe64d82292e8029ae85f3213b3f2148c23e3] <==
	2024/03/16 18:12:42 Using namespace: kubernetes-dashboard
	2024/03/16 18:12:42 Using in-cluster config to connect to apiserver
	2024/03/16 18:12:42 Using secret token for csrf signing
	2024/03/16 18:12:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/03/16 18:12:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/03/16 18:12:42 Successful initial request to the apiserver, version: v1.20.0
	2024/03/16 18:12:42 Generating JWE encryption key
	2024/03/16 18:12:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/03/16 18:12:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/03/16 18:12:42 Initializing JWE encryption key from synchronized object
	2024/03/16 18:12:42 Creating in-cluster Sidecar client
	2024/03/16 18:12:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/16 18:12:42 Serving insecurely on HTTP port: 9090
	2024/03/16 18:13:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/16 18:13:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/16 18:14:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/16 18:14:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/16 18:15:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/16 18:15:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/16 18:16:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/16 18:16:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/16 18:17:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/03/16 18:12:42 Starting overwatch
	
	
	==> storage-provisioner [7ed441150c7335e02b0c6b3ac696c632796c0d1229fc30b38f78d02560c87aa6] <==
	I0316 18:07:33.757039       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0316 18:07:33.776555       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0316 18:07:33.777196       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0316 18:07:33.790129       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0316 18:07:33.790801       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e782b13f-3eff-4a6c-92ef-a1c6f20af052", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-985498_88ac7e1c-8aa3-4db6-952b-13e965f374da became leader
	I0316 18:07:33.791280       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-985498_88ac7e1c-8aa3-4db6-952b-13e965f374da!
	I0316 18:07:33.895908       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-985498_88ac7e1c-8aa3-4db6-952b-13e965f374da!
	I0316 18:11:34.660606       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0316 18:12:04.672548       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [aa120a5aa0d886b8cd2c321b4b358ee6299f67e9b4a59d4782345a8088bff5c8] <==
	I0316 18:12:05.757843       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0316 18:12:05.778709       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0316 18:12:05.779146       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0316 18:12:23.216321       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0316 18:12:23.217470       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e782b13f-3eff-4a6c-92ef-a1c6f20af052", APIVersion:"v1", ResourceVersion:"742", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-985498_471419ea-4639-4cae-8958-0884def8dfa9 became leader
	I0316 18:12:23.220625       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-985498_471419ea-4639-4cae-8958-0884def8dfa9!
	I0316 18:12:23.322608       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-985498_471419ea-4639-4cae-8958-0884def8dfa9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-985498 -n old-k8s-version-985498
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-985498 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-xqhk9
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-985498 describe pod metrics-server-9975d5f86-xqhk9
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-985498 describe pod metrics-server-9975d5f86-xqhk9: exit status 1 (71.933833ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-xqhk9" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-985498 describe pod metrics-server-9975d5f86-xqhk9: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (445.56s)
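
Note: the failure above is a timeout waiting on the metrics-server pod, which the kubelet log shows stuck in ImagePullBackOff because its container image points at the unresolvable fake.domain/registry.k8s.io/echoserver:1.4 (fake.domain is evidently a deliberate test stand-in). A minimal sketch of the same post-mortem the harness runs, reproducible by hand against the profile's kube context; the k8s-app=metrics-server label selector is an assumption, not taken from this report:

	# list pods that are not Running, using the same field selector as helpers_test.go
	kubectl --context old-k8s-version-985498 get po -A --field-selector=status.phase!=Running
	# describe the stuck pod; the image pull error appears under Events
	kubectl --context old-k8s-version-985498 describe pod -n kube-system -l k8s-app=metrics-server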

                                                
                                    

Test pass (293/333)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.48
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.17
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.28.4/json-events 6.49
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.16
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.15
21 TestDownloadOnly/v1.29.0-rc.2/json-events 5.93
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.16
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.59
31 TestOffline 103.46
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 146.01
38 TestAddons/parallel/Registry 19.15
39 TestAddons/parallel/Ingress 21.42
40 TestAddons/parallel/InspektorGadget 11.96
41 TestAddons/parallel/MetricsServer 5.91
42 TestAddons/parallel/HelmTiller 12.04
44 TestAddons/parallel/CSI 60.56
45 TestAddons/parallel/Headlamp 13.84
46 TestAddons/parallel/CloudSpanner 7
47 TestAddons/parallel/LocalPath 60.06
48 TestAddons/parallel/NvidiaDevicePlugin 5.86
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.14
53 TestAddons/StoppedEnableDisable 92.82
54 TestCertOptions 79.87
55 TestCertExpiration 267.3
57 TestForceSystemdFlag 89.05
58 TestForceSystemdEnv 77.43
60 TestKVMDriverInstallOrUpdate 3.17
64 TestErrorSpam/setup 46.69
65 TestErrorSpam/start 0.42
66 TestErrorSpam/status 0.84
67 TestErrorSpam/pause 1.79
68 TestErrorSpam/unpause 1.94
69 TestErrorSpam/stop 5.34
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 67.74
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 43.68
76 TestFunctional/serial/KubeContext 0.05
77 TestFunctional/serial/KubectlGetPods 0.09
80 TestFunctional/serial/CacheCmd/cache/add_remote 4.38
81 TestFunctional/serial/CacheCmd/cache/add_local 2.2
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.07
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
85 TestFunctional/serial/CacheCmd/cache/cache_reload 2.09
86 TestFunctional/serial/CacheCmd/cache/delete 0.13
87 TestFunctional/serial/MinikubeKubectlCmd 0.13
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
89 TestFunctional/serial/ExtraConfig 44.42
90 TestFunctional/serial/ComponentHealth 0.08
91 TestFunctional/serial/LogsCmd 1.72
92 TestFunctional/serial/LogsFileCmd 1.77
93 TestFunctional/serial/InvalidService 4.62
95 TestFunctional/parallel/ConfigCmd 0.5
96 TestFunctional/parallel/DashboardCmd 25.07
97 TestFunctional/parallel/DryRun 0.36
98 TestFunctional/parallel/InternationalLanguage 0.17
99 TestFunctional/parallel/StatusCmd 0.97
103 TestFunctional/parallel/ServiceCmdConnect 7.65
104 TestFunctional/parallel/AddonsCmd 0.16
105 TestFunctional/parallel/PersistentVolumeClaim 42.92
107 TestFunctional/parallel/SSHCmd 0.46
108 TestFunctional/parallel/CpCmd 1.76
109 TestFunctional/parallel/MySQL 31.09
110 TestFunctional/parallel/FileSync 0.25
111 TestFunctional/parallel/CertSync 1.72
115 TestFunctional/parallel/NodeLabels 0.1
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.54
119 TestFunctional/parallel/License 0.22
120 TestFunctional/parallel/ServiceCmd/DeployApp 11.35
121 TestFunctional/parallel/Version/short 0.07
122 TestFunctional/parallel/Version/components 0.96
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
127 TestFunctional/parallel/ImageCommands/ImageBuild 4.84
128 TestFunctional/parallel/ImageCommands/Setup 0.97
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.11
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
142 TestFunctional/parallel/ProfileCmd/profile_not_create 0.51
143 TestFunctional/parallel/ProfileCmd/profile_list 0.37
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
145 TestFunctional/parallel/MountCmd/any-port 6.59
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.17
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.62
148 TestFunctional/parallel/ServiceCmd/List 0.34
149 TestFunctional/parallel/ServiceCmd/JSONOutput 0.35
150 TestFunctional/parallel/MountCmd/specific-port 2.23
151 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
152 TestFunctional/parallel/ServiceCmd/Format 0.4
153 TestFunctional/parallel/ServiceCmd/URL 0.43
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.71
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.55
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.71
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 3.3
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.32
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.02
161 TestFunctional/delete_minikube_cached_images 0.02
165 TestMultiControlPlane/serial/StartCluster 217.89
166 TestMultiControlPlane/serial/DeployApp 8.59
167 TestMultiControlPlane/serial/PingHostFromPods 1.56
168 TestMultiControlPlane/serial/AddWorkerNode 48.17
169 TestMultiControlPlane/serial/NodeLabels 0.08
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.61
171 TestMultiControlPlane/serial/CopyFile 15.12
172 TestMultiControlPlane/serial/StopSecondaryNode 93.27
173 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.44
174 TestMultiControlPlane/serial/RestartSecondaryNode 45.39
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.62
176 TestMultiControlPlane/serial/RestartClusterKeepsNodes 492.94
177 TestMultiControlPlane/serial/DeleteSecondaryNode 8.75
178 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.44
179 TestMultiControlPlane/serial/StopCluster 276.76
180 TestMultiControlPlane/serial/RestartCluster 167.38
181 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.43
182 TestMultiControlPlane/serial/AddSecondaryNode 78.9
183 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.61
187 TestJSONOutput/start/Command 100.28
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.85
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.72
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.39
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.25
215 TestMainNoArgs 0.07
216 TestMinikubeProfile 98.34
219 TestMountStart/serial/StartWithMountFirst 28.05
220 TestMountStart/serial/VerifyMountFirst 0.43
221 TestMountStart/serial/StartWithMountSecond 28.68
222 TestMountStart/serial/VerifyMountSecond 0.42
223 TestMountStart/serial/DeleteFirst 0.7
224 TestMountStart/serial/VerifyMountPostDelete 0.42
225 TestMountStart/serial/Stop 1.47
226 TestMountStart/serial/RestartStopped 24.22
227 TestMountStart/serial/VerifyMountPostStop 0.42
230 TestMultiNode/serial/FreshStart2Nodes 106.61
231 TestMultiNode/serial/DeployApp2Nodes 4.33
232 TestMultiNode/serial/PingHostFrom2Pods 0.97
233 TestMultiNode/serial/AddNode 42.31
234 TestMultiNode/serial/MultiNodeLabels 0.07
235 TestMultiNode/serial/ProfileList 0.26
236 TestMultiNode/serial/CopyFile 8.13
237 TestMultiNode/serial/StopNode 2.57
238 TestMultiNode/serial/StartAfterStop 28.18
239 TestMultiNode/serial/RestartKeepsNodes 312.18
240 TestMultiNode/serial/DeleteNode 2.41
241 TestMultiNode/serial/StopMultiNode 184.22
242 TestMultiNode/serial/RestartMultiNode 83.68
243 TestMultiNode/serial/ValidateNameConflict 51.86
248 TestPreload 275.46
250 TestScheduledStopUnix 121.85
254 TestRunningBinaryUpgrade 223.59
256 TestKubernetesUpgrade 220.13
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
263 TestNoKubernetes/serial/StartWithK8s 103.36
268 TestNetworkPlugins/group/false 3.79
272 TestStoppedBinaryUpgrade/Setup 0.45
273 TestNoKubernetes/serial/StartWithStopK8s 47.11
274 TestStoppedBinaryUpgrade/Upgrade 170.2
275 TestNoKubernetes/serial/Start 35.13
276 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
277 TestNoKubernetes/serial/ProfileList 19.12
278 TestNoKubernetes/serial/Stop 1.55
279 TestNoKubernetes/serial/StartNoArgs 51.84
287 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
288 TestStoppedBinaryUpgrade/MinikubeLogs 0.95
290 TestPause/serial/Start 145.23
291 TestNetworkPlugins/group/auto/Start 128.65
292 TestNetworkPlugins/group/kindnet/Start 93.69
293 TestPause/serial/SecondStartNoReconfiguration 45.01
294 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
295 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
296 TestNetworkPlugins/group/kindnet/NetCatPod 10.26
297 TestNetworkPlugins/group/auto/KubeletFlags 0.28
298 TestNetworkPlugins/group/auto/NetCatPod 9.59
299 TestNetworkPlugins/group/kindnet/DNS 0.21
300 TestNetworkPlugins/group/kindnet/Localhost 0.16
301 TestNetworkPlugins/group/kindnet/HairPin 0.16
302 TestNetworkPlugins/group/auto/DNS 0.19
303 TestNetworkPlugins/group/auto/Localhost 0.16
304 TestNetworkPlugins/group/auto/HairPin 0.16
305 TestPause/serial/Pause 1.01
306 TestPause/serial/VerifyStatus 0.34
307 TestPause/serial/Unpause 0.93
308 TestPause/serial/PauseAgain 1.38
309 TestPause/serial/DeletePaused 1.21
310 TestPause/serial/VerifyDeletedResources 2.03
311 TestNetworkPlugins/group/calico/Start 120.76
312 TestNetworkPlugins/group/custom-flannel/Start 85.55
313 TestNetworkPlugins/group/enable-default-cni/Start 122.63
314 TestNetworkPlugins/group/flannel/Start 156.95
315 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
316 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.35
317 TestNetworkPlugins/group/custom-flannel/DNS 0.21
318 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
319 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
320 TestNetworkPlugins/group/bridge/Start 108.73
321 TestNetworkPlugins/group/calico/ControllerPod 6.01
322 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
323 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.28
324 TestNetworkPlugins/group/calico/KubeletFlags 0.29
325 TestNetworkPlugins/group/calico/NetCatPod 11.35
326 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
327 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
328 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
329 TestNetworkPlugins/group/calico/DNS 0.24
330 TestNetworkPlugins/group/calico/Localhost 0.18
331 TestNetworkPlugins/group/calico/HairPin 0.15
333 TestStartStop/group/old-k8s-version/serial/FirstStart 136.17
335 TestStartStop/group/no-preload/serial/FirstStart 149.77
336 TestNetworkPlugins/group/flannel/ControllerPod 6.01
337 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
338 TestNetworkPlugins/group/flannel/NetCatPod 9.24
339 TestNetworkPlugins/group/flannel/DNS 0.22
340 TestNetworkPlugins/group/flannel/Localhost 0.19
341 TestNetworkPlugins/group/flannel/HairPin 0.19
343 TestStartStop/group/embed-certs/serial/FirstStart 71.91
344 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
345 TestNetworkPlugins/group/bridge/NetCatPod 10.3
346 TestNetworkPlugins/group/bridge/DNS 0.24
347 TestNetworkPlugins/group/bridge/Localhost 0.25
348 TestNetworkPlugins/group/bridge/HairPin 0.19
350 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 107.17
351 TestStartStop/group/embed-certs/serial/DeployApp 7.31
352 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.35
353 TestStartStop/group/embed-certs/serial/Stop 92.52
354 TestStartStop/group/old-k8s-version/serial/DeployApp 8.56
355 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.2
356 TestStartStop/group/old-k8s-version/serial/Stop 92.53
357 TestStartStop/group/no-preload/serial/DeployApp 8.35
358 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.22
359 TestStartStop/group/no-preload/serial/Stop 92.53
360 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.33
361 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.34
362 TestStartStop/group/default-k8s-diff-port/serial/Stop 92.53
363 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
364 TestStartStop/group/embed-certs/serial/SecondStart 324.99
365 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
367 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.28
368 TestStartStop/group/no-preload/serial/SecondStart 329.18
369 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
370 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 300.04
371 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 16.01
372 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
373 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
374 TestStartStop/group/embed-certs/serial/Pause 3.35
376 TestStartStop/group/newest-cni/serial/FirstStart 61.03
377 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.01
378 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
379 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.32
380 TestStartStop/group/no-preload/serial/Pause 4.21
381 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
382 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
383 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
384 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.21
385 TestStartStop/group/newest-cni/serial/DeployApp 0
386 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.28
387 TestStartStop/group/newest-cni/serial/Stop 2.46
388 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
389 TestStartStop/group/newest-cni/serial/SecondStart 38.49
390 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
391 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
392 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
393 TestStartStop/group/newest-cni/serial/Pause 2.9
394 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
395 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
396 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
397 TestStartStop/group/old-k8s-version/serial/Pause 2.83
TestDownloadOnly/v1.20.0/json-events (9.48s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-615562 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-615562 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (9.477139067s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.48s)
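
For context on what this subtest exercises: `minikube start -o=json` prints one CloudEvents-style JSON object per line on stdout. A minimal sketch of a consumer for that stream follows; the exact event schema is not shown in this report, so only a generic `type` field is decoded, and the profile name `demo` is purely illustrative.

// Sketch: read line-delimited JSON events from a --download-only start.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-o=json", "--download-only", "-p", "demo")
	out, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(out)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev map[string]any // decode loosely: one JSON object per line
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			fmt.Fprintln(os.Stderr, "skipping non-JSON line:", err)
			continue
		}
		fmt.Println("event type:", ev["type"])
	}
	if err := cmd.Wait(); err != nil {
		fmt.Fprintln(os.Stderr, "minikube exited:", err)
	}
}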

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-615562
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-615562: exit status 85 (81.792491ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-615562 | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC |          |
	|         | -p download-only-615562        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/16 16:55:13
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0316 16:55:13.615884  788454 out.go:291] Setting OutFile to fd 1 ...
	I0316 16:55:13.616086  788454 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 16:55:13.616098  788454 out.go:304] Setting ErrFile to fd 2...
	I0316 16:55:13.616105  788454 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 16:55:13.616334  788454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-781196/.minikube/bin
	W0316 16:55:13.616480  788454 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18277-781196/.minikube/config/config.json: open /home/jenkins/minikube-integration/18277-781196/.minikube/config/config.json: no such file or directory
	I0316 16:55:13.617157  788454 out.go:298] Setting JSON to true
	I0316 16:55:13.618101  788454 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":81461,"bootTime":1710526653,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0316 16:55:13.618202  788454 start.go:139] virtualization: kvm guest
	I0316 16:55:13.620898  788454 out.go:97] [download-only-615562] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	W0316 16:55:13.621074  788454 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18277-781196/.minikube/cache/preloaded-tarball: no such file or directory
	I0316 16:55:13.622496  788454 out.go:169] MINIKUBE_LOCATION=18277
	I0316 16:55:13.621148  788454 notify.go:220] Checking for updates...
	I0316 16:55:13.625090  788454 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 16:55:13.626467  788454 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18277-781196/kubeconfig
	I0316 16:55:13.627837  788454 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-781196/.minikube
	I0316 16:55:13.629038  788454 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0316 16:55:13.631509  788454 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0316 16:55:13.631850  788454 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 16:55:13.669841  788454 out.go:97] Using the kvm2 driver based on user configuration
	I0316 16:55:13.669878  788454 start.go:297] selected driver: kvm2
	I0316 16:55:13.669895  788454 start.go:901] validating driver "kvm2" against <nil>
	I0316 16:55:13.670278  788454 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 16:55:13.670429  788454 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18277-781196/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0316 16:55:13.688581  788454 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0316 16:55:13.688648  788454 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0316 16:55:13.689151  788454 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0316 16:55:13.689303  788454 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0316 16:55:13.689376  788454 cni.go:84] Creating CNI manager for ""
	I0316 16:55:13.689408  788454 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0316 16:55:13.689417  788454 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0316 16:55:13.689481  788454 start.go:340] cluster config:
	{Name:download-only-615562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-615562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 16:55:13.689675  788454 iso.go:125] acquiring lock: {Name:mk48d016d8d435147389d59734ec7ed09e828db8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 16:55:13.691633  788454 out.go:97] Downloading VM boot image ...
	I0316 16:55:13.691692  788454 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18277-781196/.minikube/cache/iso/amd64/minikube-v1.32.1-1710573846-18277-amd64.iso
	I0316 16:55:16.899781  788454 out.go:97] Starting "download-only-615562" primary control-plane node in "download-only-615562" cluster
	I0316 16:55:16.899819  788454 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0316 16:55:16.929227  788454 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0316 16:55:16.929263  788454 cache.go:56] Caching tarball of preloaded images
	I0316 16:55:16.929433  788454 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0316 16:55:16.931463  788454 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0316 16:55:16.931503  788454 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0316 16:55:16.957735  788454 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/18277-781196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-615562 host does not exist
	  To start a cluster, run: "minikube start -p download-only-615562"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
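
The subtest passes despite the non-zero exit because a download-only profile has no running host, so `minikube logs` is expected to refuse with exit status 85. A hypothetical helper in that spirit, extracting the numeric exit code so a caller can distinguish an expected refusal from a crash:

// Sketch: run a command expected to fail and report its exit code.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func exitCode(err error) int {
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode() // process ran and returned non-zero
	}
	return -1 // process did not run at all (e.g. binary not found)
}

func main() {
	err := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-615562").Run()
	fmt.Println("exit code:", exitCode(err)) // 85 for a host that does not exist
}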

TestDownloadOnly/v1.20.0/DeleteAll (0.17s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.17s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-615562
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.28.4/json-events (6.49s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-138192 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-138192 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (6.488320582s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (6.49s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-138192
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-138192: exit status 85 (81.371462ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-615562 | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC |                     |
	|         | -p download-only-615562        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC | 16 Mar 24 16:55 UTC |
	| delete  | -p download-only-615562        | download-only-615562 | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC | 16 Mar 24 16:55 UTC |
	| start   | -o=json --download-only        | download-only-138192 | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC |                     |
	|         | -p download-only-138192        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/16 16:55:23
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0316 16:55:23.500090  788619 out.go:291] Setting OutFile to fd 1 ...
	I0316 16:55:23.500245  788619 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 16:55:23.500255  788619 out.go:304] Setting ErrFile to fd 2...
	I0316 16:55:23.500261  788619 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 16:55:23.500489  788619 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-781196/.minikube/bin
	I0316 16:55:23.501133  788619 out.go:298] Setting JSON to true
	I0316 16:55:23.502062  788619 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":81471,"bootTime":1710526653,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0316 16:55:23.502140  788619 start.go:139] virtualization: kvm guest
	I0316 16:55:23.504469  788619 out.go:97] [download-only-138192] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0316 16:55:23.506258  788619 out.go:169] MINIKUBE_LOCATION=18277
	I0316 16:55:23.504719  788619 notify.go:220] Checking for updates...
	I0316 16:55:23.509418  788619 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 16:55:23.511016  788619 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18277-781196/kubeconfig
	I0316 16:55:23.512474  788619 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-781196/.minikube
	I0316 16:55:23.513823  788619 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0316 16:55:23.516352  788619 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0316 16:55:23.516644  788619 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 16:55:23.552715  788619 out.go:97] Using the kvm2 driver based on user configuration
	I0316 16:55:23.552767  788619 start.go:297] selected driver: kvm2
	I0316 16:55:23.552788  788619 start.go:901] validating driver "kvm2" against <nil>
	I0316 16:55:23.553294  788619 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 16:55:23.553418  788619 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18277-781196/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0316 16:55:23.570654  788619 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0316 16:55:23.570759  788619 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0316 16:55:23.571306  788619 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0316 16:55:23.571488  788619 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0316 16:55:23.571556  788619 cni.go:84] Creating CNI manager for ""
	I0316 16:55:23.571569  788619 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0316 16:55:23.571578  788619 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0316 16:55:23.571644  788619 start.go:340] cluster config:
	{Name:download-only-138192 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-138192 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 16:55:23.571743  788619 iso.go:125] acquiring lock: {Name:mk48d016d8d435147389d59734ec7ed09e828db8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 16:55:23.573511  788619 out.go:97] Starting "download-only-138192" primary control-plane node in "download-only-138192" cluster
	I0316 16:55:23.573551  788619 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0316 16:55:23.606276  788619 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0316 16:55:23.606314  788619 cache.go:56] Caching tarball of preloaded images
	I0316 16:55:23.606493  788619 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime containerd
	I0316 16:55:23.608409  788619 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0316 16:55:23.608440  788619 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I0316 16:55:23.635826  788619 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:36bbd14dd3f64efb2d3840dd67e48180 -> /home/jenkins/minikube-integration/18277-781196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4
	I0316 16:55:28.258508  788619 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	I0316 16:55:28.258658  788619 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18277-781196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-138192 host does not exist
	  To start a cluster, run: "minikube start -p download-only-138192"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)
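
The "saving checksum"/"verifying checksum" lines in the log above correspond to the md5 digest embedded in the preload download URL. A sketch of that verification, reusing the digest and tarball name from this run purely for illustration (the cache path under $HOME/.minikube is an assumption; this CI job uses a MINIKUBE_HOME override):

// Sketch: hash the cached preload tarball and compare against the expected md5.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func main() {
	const want = "36bbd14dd3f64efb2d3840dd67e48180" // from the download URL above
	f, err := os.Open(os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/" +
		"preloaded-images-k8s-v18-v1.28.4-containerd-overlay2-amd64.tar.lz4"))
	if err != nil {
		panic(err)
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		panic(err)
	}
	got := hex.EncodeToString(h.Sum(nil))
	fmt.Println("checksum match:", got == want)
}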

TestDownloadOnly/v1.28.4/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.16s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-138192
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.29.0-rc.2/json-events (5.93s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-680461 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-680461 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (5.933362779s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (5.93s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-680461
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-680461: exit status 85 (79.429122ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-615562 | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC |                     |
	|         | -p download-only-615562           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC | 16 Mar 24 16:55 UTC |
	| delete  | -p download-only-615562           | download-only-615562 | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC | 16 Mar 24 16:55 UTC |
	| start   | -o=json --download-only           | download-only-138192 | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC |                     |
	|         | -p download-only-138192           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC | 16 Mar 24 16:55 UTC |
	| delete  | -p download-only-138192           | download-only-138192 | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC | 16 Mar 24 16:55 UTC |
	| start   | -o=json --download-only           | download-only-680461 | jenkins | v1.32.0 | 16 Mar 24 16:55 UTC |                     |
	|         | -p download-only-680461           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/16 16:55:30
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0316 16:55:30.378277  788783 out.go:291] Setting OutFile to fd 1 ...
	I0316 16:55:30.378469  788783 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 16:55:30.378484  788783 out.go:304] Setting ErrFile to fd 2...
	I0316 16:55:30.378490  788783 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 16:55:30.378708  788783 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-781196/.minikube/bin
	I0316 16:55:30.379369  788783 out.go:298] Setting JSON to true
	I0316 16:55:30.380423  788783 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":81478,"bootTime":1710526653,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0316 16:55:30.380522  788783 start.go:139] virtualization: kvm guest
	I0316 16:55:30.382947  788783 out.go:97] [download-only-680461] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0316 16:55:30.384669  788783 out.go:169] MINIKUBE_LOCATION=18277
	I0316 16:55:30.383217  788783 notify.go:220] Checking for updates...
	I0316 16:55:30.387343  788783 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 16:55:30.388712  788783 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18277-781196/kubeconfig
	I0316 16:55:30.390186  788783 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-781196/.minikube
	I0316 16:55:30.391473  788783 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0316 16:55:30.394043  788783 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0316 16:55:30.394334  788783 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 16:55:30.428646  788783 out.go:97] Using the kvm2 driver based on user configuration
	I0316 16:55:30.428698  788783 start.go:297] selected driver: kvm2
	I0316 16:55:30.428713  788783 start.go:901] validating driver "kvm2" against <nil>
	I0316 16:55:30.429160  788783 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 16:55:30.429251  788783 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18277-781196/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0316 16:55:30.446272  788783 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0316 16:55:30.446401  788783 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0316 16:55:30.447365  788783 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0316 16:55:30.447599  788783 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0316 16:55:30.447698  788783 cni.go:84] Creating CNI manager for ""
	I0316 16:55:30.447719  788783 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0316 16:55:30.447739  788783 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0316 16:55:30.447813  788783 start.go:340] cluster config:
	{Name:download-only-680461 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-680461 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 16:55:30.447913  788783 iso.go:125] acquiring lock: {Name:mk48d016d8d435147389d59734ec7ed09e828db8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 16:55:30.449552  788783 out.go:97] Starting "download-only-680461" primary control-plane node in "download-only-680461" cluster
	I0316 16:55:30.449577  788783 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0316 16:55:30.479416  788783 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0316 16:55:30.479483  788783 cache.go:56] Caching tarball of preloaded images
	I0316 16:55:30.479666  788783 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime containerd
	I0316 16:55:30.481427  788783 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0316 16:55:30.481462  788783 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0316 16:55:30.510110  788783 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:e143dbc3b8285cd3241a841ac2b6b7fc -> /home/jenkins/minikube-integration/18277-781196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4
	I0316 16:55:34.759875  788783 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	I0316 16:55:34.759984  788783 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18277-781196/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-containerd-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-680461 host does not exist
	  To start a cluster, run: "minikube start -p download-only-680461"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.16s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.16s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-680461
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-136570 --alsologtostderr --binary-mirror http://127.0.0.1:34371 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-136570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-136570
--- PASS: TestBinaryMirror (0.59s)
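
The test points --binary-mirror at a throwaway HTTP endpoint on 127.0.0.1:34371 from which minikube fetches Kubernetes binaries. A minimal sketch of the kind of server that can play that role is below; the ./mirror directory layout (mirroring upstream release paths such as /release/v1.28.4/bin/linux/amd64/kubectl) is an assumption for illustration, not the harness's actual implementation.

// Sketch: a plain HTTP file server acting as a local binary mirror.
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve files from ./mirror at the root path.
	http.Handle("/", http.FileServer(http.Dir("./mirror")))
	log.Println("serving binary mirror on http://127.0.0.1:34371")
	log.Fatal(http.ListenAndServe("127.0.0.1:34371", nil))
}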

TestOffline (103.46s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-404386 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-404386 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m42.309343078s)
helpers_test.go:175: Cleaning up "offline-containerd-404386" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-404386
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-404386: (1.146517383s)
--- PASS: TestOffline (103.46s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-867363
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-867363: exit status 85 (65.362571ms)

-- stdout --
	* Profile "addons-867363" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-867363"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-867363
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-867363: exit status 85 (65.713803ms)

-- stdout --
	* Profile "addons-867363" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-867363"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (146.01s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-867363 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-867363 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m26.005622592s)
--- PASS: TestAddons/Setup (146.01s)
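
The setup is a single start invocation carrying a dozen --addons flags. A sketch of how a harness might assemble that command from a slice of addon names rather than hand-writing each flag; the addon list and profile name are copied from this run, and building the command this way (rather than one literal string) is just an illustration of the pattern.

// Sketch: assemble the long `minikube start` invocation programmatically.
package main

import (
	"os"
	"os/exec"
)

func main() {
	addons := []string{
		"registry", "metrics-server", "volumesnapshots", "csi-hostpath-driver",
		"gcp-auth", "cloud-spanner", "inspektor-gadget", "storage-provisioner-rancher",
		"nvidia-device-plugin", "yakd", "ingress", "ingress-dns", "helm-tiller",
	}
	args := []string{"start", "-p", "addons-867363", "--wait=true", "--memory=4000",
		"--driver=kvm2", "--container-runtime=containerd"}
	for _, a := range addons {
		args = append(args, "--addons="+a)
	}
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}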

TestAddons/parallel/Registry (19.15s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 29.832525ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-flxp9" [f5b8a5ce-0a2a-4e49-82a8-d51c543360bb] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.008885101s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-k2sg7" [cedb82bc-835b-4744-9fa7-160d14fe91d7] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.012172732s
addons_test.go:340: (dbg) Run:  kubectl --context addons-867363 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-867363 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-867363 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.941536351s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-867363 ip
2024/03/16 16:58:21 [DEBUG] GET http://192.168.39.88:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-867363 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.15s)
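
The reachability check above runs `wget --spider` against the registry's in-cluster service DNS name from a throwaway busybox pod. An equivalent probe sketched in Go: issue a HEAD request and check only that the service answers. The service name resolves only from inside the cluster, which is why the test wraps the probe in a pod.

// Sketch: HEAD probe of the in-cluster registry service.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		fmt.Println("registry unreachable:", err)
		return
	}
	resp.Body.Close()
	fmt.Println("registry responded with:", resp.Status)
}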

TestAddons/parallel/Ingress (21.42s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-867363 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-867363 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-867363 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [365040cc-5fca-41f8-a33a-975c1839bbba] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [365040cc-5fca-41f8-a33a-975c1839bbba] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004678698s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-867363 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-867363 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-867363 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.88
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-867363 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-867363 addons disable ingress-dns --alsologtostderr -v=1: (1.569963632s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-867363 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-867363 addons disable ingress --alsologtostderr -v=1: (8.282286383s)
--- PASS: TestAddons/parallel/Ingress (21.42s)
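
The curl above dials the ingress endpoint by IP but presents nginx.example.com as the virtual host so the ingress controller routes to the nginx backend. The same Host-header trick sketched in Go (dialing the node IP from outside the VM, rather than 127.0.0.1 from inside it as the test does; the IP and hostname are the ones from this run):

// Sketch: request by IP while overriding the Host header.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://192.168.39.88/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com" // overrides the Host header, not the dial address
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}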

TestAddons/parallel/InspektorGadget (11.96s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-v6c24" [e25ecd53-0584-4484-bea1-7cc58c9ef4b4] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005491645s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-867363
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-867363: (5.949589978s)
--- PASS: TestAddons/parallel/InspektorGadget (11.96s)

TestAddons/parallel/MetricsServer (5.91s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 29.895228ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-dwwt4" [b4c1534e-ef05-4b25-8ebe-3d3daf490a77] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006273318s
addons_test.go:415: (dbg) Run:  kubectl --context addons-867363 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-867363 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.91s)

TestAddons/parallel/HelmTiller (12.04s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.007188ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-rbwnr" [805b7653-3d0a-4bbb-bb41-552f85b6e519] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.005602059s
addons_test.go:473: (dbg) Run:  kubectl --context addons-867363 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-867363 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.025900507s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-867363 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.04s)

TestAddons/parallel/CSI (60.56s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 31.544851ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-867363 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867363 get pvc hpvc -o jsonpath={.status.phase} -n default  (identical poll repeated 17 times while waiting for the claim to bind)
addons_test.go:574: (dbg) Run:  kubectl --context addons-867363 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c1d7126d-9337-41cd-bfd5-939db6575d56] Pending
helpers_test.go:344: "task-pv-pod" [c1d7126d-9337-41cd-bfd5-939db6575d56] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c1d7126d-9337-41cd-bfd5-939db6575d56] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.004927036s
addons_test.go:584: (dbg) Run:  kubectl --context addons-867363 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-867363 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-867363 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-867363 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-867363 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-867363 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867363 get pvc hpvc-restore -o jsonpath={.status.phase} -n default  (identical poll repeated 13 times while waiting for the restored claim to bind)
addons_test.go:616: (dbg) Run:  kubectl --context addons-867363 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1cb35263-076a-4352-9a12-7dbdeb23bcb7] Pending
helpers_test.go:344: "task-pv-pod-restore" [1cb35263-076a-4352-9a12-7dbdeb23bcb7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1cb35263-076a-4352-9a12-7dbdeb23bcb7] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004513197s
addons_test.go:626: (dbg) Run:  kubectl --context addons-867363 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-867363 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-867363 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-867363 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-867363 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.02305359s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-867363 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (60.56s)
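
The manifests applied above live under the repo's testdata/csi-hostpath-driver directory and are not reproduced in this log. A minimal stand-in for the PVC-plus-snapshot portion of the flow, assuming the csi-hostpath-sc storage class and csi-hostpath-snapclass snapshot class that the addon installs:

kubectl --context addons-867363 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass
  source:
    persistentVolumeClaimName: hpvc
EOF

The pvc-restore.yaml step then points a new claim's dataSource at the snapshot, which is what task-pv-pod-restore mounts.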

                                                
                                    
TestAddons/parallel/Headlamp (13.84s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-867363 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-867363 --alsologtostderr -v=1: (1.832409581s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-5btgp" [ba93a5c8-038e-45e8-851b-36997ab33fb6] Pending
helpers_test.go:344: "headlamp-5485c556b-5btgp" [ba93a5c8-038e-45e8-851b-36997ab33fb6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-5btgp" [ba93a5c8-038e-45e8-851b-36997ab33fb6] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.006639114s
--- PASS: TestAddons/parallel/Headlamp (13.84s)

                                                
                                    
TestAddons/parallel/CloudSpanner (7.00s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-psg5n" [4a03ca3e-dbb1-46ab-a17a-418a1174c807] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005053663s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-867363
--- PASS: TestAddons/parallel/CloudSpanner (7.00s)

                                                
                                    
TestAddons/parallel/LocalPath (60.06s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-867363 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-867363 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-867363 get pvc test-pvc -o jsonpath={.status.phase} -n default  (identical poll repeated 10 times while waiting for the claim to bind)
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [634a0610-f76c-4816-ae98-e838394cd583] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [634a0610-f76c-4816-ae98-e838394cd583] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [634a0610-f76c-4816-ae98-e838394cd583] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.005226666s
addons_test.go:891: (dbg) Run:  kubectl --context addons-867363 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-867363 ssh "cat /opt/local-path-provisioner/pvc-8bdcfb88-24b5-4df4-a827-437db8476147_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-867363 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-867363 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-867363 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-867363 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.071314726s)
--- PASS: TestAddons/parallel/LocalPath (60.06s)
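
The storage-provisioner-rancher addon is Rancher's local-path-provisioner. A minimal claim that exercises it, assuming the addon's default local-path storage class:

kubectl --context addons-867363 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 64Mi
  storageClassName: local-path
EOF

local-path volumes typically bind on first consumer, so a claim like this stays Pending until a pod that mounts it is scheduled; that matches the repeated phase polls above even after pod.yaml was applied.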

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.86s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-nq772" [1407d4ee-21c9-447a-baf1-2234fbedcc67] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.007074669s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-867363
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.86s)

                                                
                                    
TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-t874r" [4fa02490-4ce7-4aea-bf9d-69d839b08843] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004726512s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-867363 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-867363 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)
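
What this asserts: with the gcp-auth addon enabled, minikube copies its gcp-auth credentials secret into namespaces created afterwards, so workloads in a new namespace get credentials without extra steps. The check is just the two commands above, run in order:

kubectl --context addons-867363 create ns new-namespace
kubectl --context addons-867363 get secret gcp-auth -n new-namespace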

                                                
                                    
TestAddons/StoppedEnableDisable (92.82s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-867363
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-867363: (1m32.46770524s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-867363
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-867363
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-867363
--- PASS: TestAddons/StoppedEnableDisable (92.82s)

                                                
                                    
TestCertOptions (79.87s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-297130 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
E0316 17:59:56.779033  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-297130 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m18.230994069s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-297130 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-297130 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-297130 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-297130" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-297130
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-297130: (1.106836486s)
--- PASS: TestCertOptions (79.87s)
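
The openssl step above is where the extra SANs and the non-default apiserver port are verified. A hand-run equivalent that filters straight to the SAN block (profile name from this run):

out/minikube-linux-amd64 ssh -p cert-options-297130 -- "sudo openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'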

                                                
                                    
TestCertExpiration (267.30s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-802262 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
E0316 17:59:39.826018  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-802262 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m9.732502577s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-802262 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-802262 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (16.468433406s)
helpers_test.go:175: Cleaning up "cert-expiration-802262" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-802262
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-802262: (1.100505825s)
--- PASS: TestCertExpiration (267.30s)
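
The sequence here: start with --cert-expiration=3m, let the window lapse (the gap between the two starts accounts for most of the 267s), then restart with --cert-expiration=8760h and confirm minikube regenerates the certificates. The resulting expiry can be inspected directly; a sketch reusing this run's profile:

out/minikube-linux-amd64 ssh -p cert-expiration-802262 -- "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"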

                                                
                                    
TestForceSystemdFlag (89.05s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-637166 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-637166 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m27.703885519s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-637166 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-637166" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-637166
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-637166: (1.109033872s)
--- PASS: TestForceSystemdFlag (89.05s)
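
The config.toml read above is checking the cgroup driver: with --force-systemd, the runc options in containerd's config are expected to carry SystemdCgroup = true. A narrower hand check, assuming the same profile:

out/minikube-linux-amd64 -p force-systemd-flag-637166 ssh "grep SystemdCgroup /etc/containerd/config.toml"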

                                                
                                    
TestForceSystemdEnv (77.43s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-478037 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-478037 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m16.32806247s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-478037 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-478037" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-478037
--- PASS: TestForceSystemdEnv (77.43s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3.17s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.17s)

                                                
                                    
TestErrorSpam/setup (46.69s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-979409 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-979409 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-979409 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-979409 --driver=kvm2  --container-runtime=containerd: (46.691805703s)
--- PASS: TestErrorSpam/setup (46.69s)

                                                
                                    
TestErrorSpam/start (0.42s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-979409 --log_dir /tmp/nospam-979409 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-979409 --log_dir /tmp/nospam-979409 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-979409 --log_dir /tmp/nospam-979409 start --dry-run
--- PASS: TestErrorSpam/start (0.42s)

                                                
                                    
TestErrorSpam/status (0.84s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-979409 --log_dir /tmp/nospam-979409 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-979409 --log_dir /tmp/nospam-979409 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-979409 --log_dir /tmp/nospam-979409 status
--- PASS: TestErrorSpam/status (0.84s)

                                                
                                    
TestErrorSpam/pause (1.79s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-979409 --log_dir /tmp/nospam-979409 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-979409 --log_dir /tmp/nospam-979409 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-979409 --log_dir /tmp/nospam-979409 pause
--- PASS: TestErrorSpam/pause (1.79s)

                                                
                                    
TestErrorSpam/unpause (1.94s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-979409 --log_dir /tmp/nospam-979409 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-979409 --log_dir /tmp/nospam-979409 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-979409 --log_dir /tmp/nospam-979409 unpause
--- PASS: TestErrorSpam/unpause (1.94s)

                                                
                                    
TestErrorSpam/stop (5.34s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-979409 --log_dir /tmp/nospam-979409 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-979409 --log_dir /tmp/nospam-979409 stop: (2.308358379s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-979409 --log_dir /tmp/nospam-979409 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-979409 --log_dir /tmp/nospam-979409 stop: (1.743238459s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-979409 --log_dir /tmp/nospam-979409 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-979409 --log_dir /tmp/nospam-979409 stop: (1.283558349s)
--- PASS: TestErrorSpam/stop (5.34s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.00s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18277-781196/.minikube/files/etc/test/nested/copy/788442/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (67.74s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-344728 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0316 17:03:03.749812  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
E0316 17:03:03.755762  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
E0316 17:03:03.766116  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
E0316 17:03:03.786505  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
E0316 17:03:03.826899  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
E0316 17:03:03.907329  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
E0316 17:03:04.067807  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
E0316 17:03:04.388471  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
E0316 17:03:05.029478  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
E0316 17:03:06.310300  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
E0316 17:03:08.870502  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-344728 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m7.740853756s)
--- PASS: TestFunctional/serial/StartWithProxy (67.74s)

                                                
                                    
TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (43.68s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-344728 --alsologtostderr -v=8
E0316 17:03:13.990790  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
E0316 17:03:24.231462  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
E0316 17:03:44.712657  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-344728 --alsologtostderr -v=8: (43.675942584s)
functional_test.go:659: soft start took 43.676660631s for "functional-344728" cluster.
--- PASS: TestFunctional/serial/SoftStart (43.68s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-344728 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-344728 cache add registry.k8s.io/pause:3.1: (1.566199895s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-344728 cache add registry.k8s.io/pause:3.3: (1.46213014s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-344728 cache add registry.k8s.io/pause:latest: (1.34733368s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.38s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.20s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-344728 /tmp/TestFunctionalserialCacheCmdcacheadd_local2693528137/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 cache add minikube-local-cache-test:functional-344728
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-344728 cache add minikube-local-cache-test:functional-344728: (1.78985338s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 cache delete minikube-local-cache-test:functional-344728
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-344728
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.20s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-344728 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (245.56668ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-344728 cache reload: (1.304361977s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)
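
The cache subcommands above form a round trip: the image is deleted from the node's containerd store, cache reload pushes it back in from the host-side cache, and the final crictl inspecti succeeds again. Condensed:

out/minikube-linux-amd64 -p functional-344728 ssh sudo crictl rmi registry.k8s.io/pause:latest
out/minikube-linux-amd64 -p functional-344728 cache reload
out/minikube-linux-amd64 -p functional-344728 ssh sudo crictl inspecti registry.k8s.io/pause:latest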

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 kubectl -- --context functional-344728 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-344728 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (44.42s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-344728 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0316 17:04:25.674494  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-344728 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.421620339s)
functional_test.go:757: restart took 44.421779749s for "functional-344728" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (44.42s)
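
--extra-config takes component.key=value pairs that are passed through as flags to the named component (apiserver, kubelet, scheduler, and so on); the restart above re-renders the kube-apiserver manifest with the extra admission plugin enabled:

out/minikube-linux-amd64 start -p functional-344728 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all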

                                                
                                    
TestFunctional/serial/ComponentHealth (0.08s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-344728 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.08s)
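
The assertions above read each control-plane pod's phase and Ready condition from the JSON returned by the label query. Roughly the same view by hand:

kubectl --context functional-344728 get po -l tier=control-plane -n kube-system -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'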

                                                
                                    
TestFunctional/serial/LogsCmd (1.72s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-344728 logs: (1.724497508s)
--- PASS: TestFunctional/serial/LogsCmd (1.72s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.77s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 logs --file /tmp/TestFunctionalserialLogsFileCmd1498335012/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-344728 logs --file /tmp/TestFunctionalserialLogsFileCmd1498335012/001/logs.txt: (1.771788636s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.77s)

                                                
                                    
TestFunctional/serial/InvalidService (4.62s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-344728 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-344728
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-344728: exit status 115 (336.388699ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.188:30542 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-344728 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-344728 delete -f testdata/invalidsvc.yaml: (1.038669879s)
--- PASS: TestFunctional/serial/InvalidService (4.62s)
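
testdata/invalidsvc.yaml is not reproduced in this log; judging by the output, it is a NodePort service whose selector matches no running pod, so minikube service has no endpoint to tunnel to and exits with SVC_UNREACHABLE (status 115). A hypothetical stand-in:

kubectl --context functional-344728 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: no-such-pod
  ports:
  - port: 80
EOF
out/minikube-linux-amd64 service invalid-svc -p functional-344728   # expected: exit status 115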

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.50s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-344728 config get cpus: exit status 14 (96.440136ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-344728 config get cpus: exit status 14 (68.081503ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (25.07s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-344728 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-344728 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 796150: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (25.07s)

                                                
                                    
TestFunctional/parallel/DryRun (0.36s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-344728 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-344728 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (180.59056ms)

                                                
                                                
-- stdout --
	* [functional-344728] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18277
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18277-781196/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-781196/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0316 17:05:10.070737  795634 out.go:291] Setting OutFile to fd 1 ...
	I0316 17:05:10.070902  795634 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:05:10.070913  795634 out.go:304] Setting ErrFile to fd 2...
	I0316 17:05:10.070920  795634 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:05:10.073043  795634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-781196/.minikube/bin
	I0316 17:05:10.074077  795634 out.go:298] Setting JSON to false
	I0316 17:05:10.075193  795634 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":82057,"bootTime":1710526653,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0316 17:05:10.075286  795634 start.go:139] virtualization: kvm guest
	I0316 17:05:10.077527  795634 out.go:177] * [functional-344728] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0316 17:05:10.079288  795634 out.go:177]   - MINIKUBE_LOCATION=18277
	I0316 17:05:10.080503  795634 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 17:05:10.079344  795634 notify.go:220] Checking for updates...
	I0316 17:05:10.081962  795634 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18277-781196/kubeconfig
	I0316 17:05:10.083321  795634 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-781196/.minikube
	I0316 17:05:10.084651  795634 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0316 17:05:10.086241  795634 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 17:05:10.088521  795634 config.go:182] Loaded profile config "functional-344728": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0316 17:05:10.089207  795634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:05:10.089330  795634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:05:10.106994  795634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37067
	I0316 17:05:10.107549  795634 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:05:10.108299  795634 main.go:141] libmachine: Using API Version  1
	I0316 17:05:10.108339  795634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:05:10.108756  795634 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:05:10.108955  795634 main.go:141] libmachine: (functional-344728) Calling .DriverName
	I0316 17:05:10.109287  795634 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 17:05:10.109775  795634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:05:10.109841  795634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:05:10.126968  795634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34535
	I0316 17:05:10.127458  795634 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:05:10.128125  795634 main.go:141] libmachine: Using API Version  1
	I0316 17:05:10.128165  795634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:05:10.128593  795634 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:05:10.128897  795634 main.go:141] libmachine: (functional-344728) Calling .DriverName
	I0316 17:05:10.167166  795634 out.go:177] * Using the kvm2 driver based on existing profile
	I0316 17:05:10.168367  795634 start.go:297] selected driver: kvm2
	I0316 17:05:10.168399  795634 start.go:901] validating driver "kvm2" against &{Name:functional-344728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-344728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 17:05:10.168561  795634 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 17:05:10.170554  795634 out.go:177] 
	W0316 17:05:10.171785  795634 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0316 17:05:10.173104  795634 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-344728 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.36s)
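
The dry-run pair above is a pure config-validation check: minikube must reject --memory 250MB with a non-zero exit before any driver work starts, and accept the same flags with a sane memory value. A minimal Go sketch of the failing half, assuming the binary path and profile name from the log:

// Hypothetical re-creation of the dry-run assertion: run the minikube
// binary with an undersized --memory and require a non-zero exit.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-344728", "--dry-run", "--memory", "250MB",
		"--alsologtostderr", "--driver=kvm2", "--container-runtime=containerd")
	err := cmd.Run()
	exitErr, ok := err.(*exec.ExitError)
	if !ok {
		fmt.Println("expected the dry run to fail, got:", err)
		return
	}
	// The log shows the RSRC_INSUFFICIENT_REQ_MEMORY path firing; the
	// harness only requires that the exit code is non-zero here.
	fmt.Println("dry run rejected as expected, exit code:", exitErr.ExitCode())
}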

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-344728 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-344728 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (165.343921ms)

-- stdout --
	* [functional-344728] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18277
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18277-781196/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-781196/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0316 17:05:10.419476  795718 out.go:291] Setting OutFile to fd 1 ...
	I0316 17:05:10.419601  795718 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:05:10.419609  795718 out.go:304] Setting ErrFile to fd 2...
	I0316 17:05:10.419613  795718 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:05:10.419957  795718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-781196/.minikube/bin
	I0316 17:05:10.420608  795718 out.go:298] Setting JSON to false
	I0316 17:05:10.421672  795718 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":82058,"bootTime":1710526653,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0316 17:05:10.421761  795718 start.go:139] virtualization: kvm guest
	I0316 17:05:10.423941  795718 out.go:177] * [functional-344728] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0316 17:05:10.425465  795718 out.go:177]   - MINIKUBE_LOCATION=18277
	I0316 17:05:10.425519  795718 notify.go:220] Checking for updates...
	I0316 17:05:10.426922  795718 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 17:05:10.428380  795718 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18277-781196/kubeconfig
	I0316 17:05:10.429759  795718 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-781196/.minikube
	I0316 17:05:10.431202  795718 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0316 17:05:10.432522  795718 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 17:05:10.434463  795718 config.go:182] Loaded profile config "functional-344728": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0316 17:05:10.435157  795718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:05:10.435242  795718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:05:10.452412  795718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40209
	I0316 17:05:10.453019  795718 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:05:10.453794  795718 main.go:141] libmachine: Using API Version  1
	I0316 17:05:10.453830  795718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:05:10.454312  795718 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:05:10.454599  795718 main.go:141] libmachine: (functional-344728) Calling .DriverName
	I0316 17:05:10.454935  795718 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 17:05:10.455266  795718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:05:10.455313  795718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:05:10.472522  795718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36745
	I0316 17:05:10.473124  795718 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:05:10.473720  795718 main.go:141] libmachine: Using API Version  1
	I0316 17:05:10.473750  795718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:05:10.474151  795718 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:05:10.474416  795718 main.go:141] libmachine: (functional-344728) Calling .DriverName
	I0316 17:05:10.512900  795718 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0316 17:05:10.514324  795718 start.go:297] selected driver: kvm2
	I0316 17:05:10.514344  795718 start.go:901] validating driver "kvm2" against &{Name:functional-344728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-344728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.188 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 17:05:10.514539  795718 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 17:05:10.516685  795718 out.go:177] 
	W0316 17:05:10.517810  795718 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0316 17:05:10.518995  795718 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)
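
The French output above is the assertion itself: the same RSRC_INSUFFICIENT_REQ_MEMORY failure must come back localized ("Fermeture en raison de ...") when the caller's locale is French. A sketch of forcing that locale on a child process; treating LC_ALL as the switch minikube's translation lookup honors is an assumption here, not something the log confirms:

// Minimal sketch: run the CLI under a French locale (assumed to be
// selected via LC_ALL) and scan for the localized error prefix.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-344728", "--dry-run", "--memory", "250MB",
		"--driver=kvm2", "--container-runtime=containerd")
	cmd.Env = append(os.Environ(), "LC_ALL=fr") // assumed locale switch
	out, _ := cmd.CombinedOutput()
	if strings.Contains(string(out), "Fermeture en raison de") {
		fmt.Println("output localized to French as expected")
	}
}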

                                                
                                    
TestFunctional/parallel/StatusCmd (0.97s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.97s)
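
The second status invocation above feeds a Go text/template through -f; minikube executes it against its status struct, so host:{{.Host}},kublet:{{.Kubelet}},... renders as one comma-separated line (the "kublet" key is the test's own format string, reproduced verbatim). A sketch of the same rendering; the Status type below is invented for illustration, and only the field names referenced by the template matter:

// Execute the -f format string against a stand-in status struct.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	s := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	// prints: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
	tmpl.Execute(os.Stdout, s)
}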

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.65s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-344728 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-344728 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-8wj9z" [a9aa78c4-43a5-4245-b0c5-f2ec3d3b8c88] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-8wj9z" [a9aa78c4-43a5-4245-b0c5-f2ec3d3b8c88] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.005734322s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.188:32719
functional_test.go:1671: http://192.168.39.188:32719: success! body:

Hostname: hello-node-connect-55497b8b78-8wj9z

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.188:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.188:32719
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.65s)
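
Once "minikube service ... --url" has reported the NodePort endpoint (http://192.168.39.188:32719 above), the connectivity check reduces to fetching that URL and finding the echoserver's reply. A sketch of that probe, reusing the address from the log:

// GET the NodePort URL printed by "minikube service --url" and look
// for the echoserver's Hostname line in the body.
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://192.168.39.188:32719")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode == http.StatusOK && strings.Contains(string(body), "Hostname:") {
		fmt.Println("echoserver reachable:")
		fmt.Println(string(body))
	}
}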

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (42.92s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [dc3a074a-cbe9-45dc-9e56-f255d5f20b9a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008978925s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-344728 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-344728 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-344728 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-344728 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-344728 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d4a2d7d9-7476-4e8e-a91a-7cbe9a532f14] Pending
helpers_test.go:344: "sp-pod" [d4a2d7d9-7476-4e8e-a91a-7cbe9a532f14] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d4a2d7d9-7476-4e8e-a91a-7cbe9a532f14] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.004473749s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-344728 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-344728 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-344728 delete -f testdata/storage-provisioner/pod.yaml: (1.844657359s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-344728 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [deefc8e1-20fd-41a1-93a7-67c3419e84fc] Pending
helpers_test.go:344: "sp-pod" [deefc8e1-20fd-41a1-93a7-67c3419e84fc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [deefc8e1-20fd-41a1-93a7-67c3419e84fc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.009449577s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-344728 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.92s)
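
The persistence claim above is proved by a three-step round trip: touch a file on the mounted claim from the first sp-pod, delete that pod, then list the mount from a freshly scheduled pod bound to the same PVC. A sketch of the same sequence through kubectl, using the paths from the log (the wait for the replacement pod to reach Running is elided here):

// PVC round trip: write through one pod, delete it, and read the file
// back from a replacement pod bound to the same claim.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("kubectl %v: %v\n%s", args, err, out)
}

func main() {
	ctx := "--context=functional-344728"
	run(ctx, "exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	run(ctx, "delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run(ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// once the new sp-pod is Running, the file must still be there
	run(ctx, "exec", "sp-pod", "--", "ls", "/tmp/mount")
}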

                                                
                                    
TestFunctional/parallel/SSHCmd (0.46s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)

TestFunctional/parallel/CpCmd (1.76s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh -n functional-344728 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 cp functional-344728:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3473760730/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh -n functional-344728 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh -n functional-344728 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.76s)
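
Each cp case above follows one shape: copy a file in (or out), then "ssh sudo cat" the destination and compare against the source bytes; the comparison is implicit in the helper. A sketch of one direction with the compare made explicit, using the paths from the log:

// Push a file into the VM, cat it back over ssh, and compare with the
// local source.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-amd64"
	exec.Command(mk, "-p", "functional-344728", "cp",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt").Run()
	got, _ := exec.Command(mk, "-p", "functional-344728", "ssh", "-n",
		"functional-344728", "sudo cat /home/docker/cp-test.txt").Output()
	want, _ := os.ReadFile("testdata/cp-test.txt")
	fmt.Println("round trip intact:", bytes.Equal(got, want))
}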

                                                
                                    
TestFunctional/parallel/MySQL (31.09s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-344728 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-lhmss" [b06c9e86-3e9f-4d5b-9145-eea03fb4b327] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-lhmss" [b06c9e86-3e9f-4d5b-9145-eea03fb4b327] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.005301028s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-344728 exec mysql-859648c796-lhmss -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-344728 exec mysql-859648c796-lhmss -- mysql -ppassword -e "show databases;": exit status 1 (245.919624ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-344728 exec mysql-859648c796-lhmss -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-344728 exec mysql-859648c796-lhmss -- mysql -ppassword -e "show databases;": exit status 1 (196.613827ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-344728 exec mysql-859648c796-lhmss -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-344728 exec mysql-859648c796-lhmss -- mysql -ppassword -e "show databases;": exit status 1 (188.272155ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
2024/03/16 17:05:37 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1803: (dbg) Run:  kubectl --context functional-344728 exec mysql-859648c796-lhmss -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-344728 exec mysql-859648c796-lhmss -- mysql -ppassword -e "show databases;": exit status 1 (143.081896ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-344728 exec mysql-859648c796-lhmss -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (31.09s)
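
The four failed "show databases;" attempts above are expected startup noise: ERROR 2002 means mysqld is not yet accepting socket connections, and ERROR 1045 means user provisioning has not finished, so the harness keeps retrying until the query succeeds. A sketch of that retry loop against the pod name from the log:

// Re-run the query until mysqld accepts it or the deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(5 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "functional-344728",
			"exec", "mysql-859648c796-lhmss", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("mysql ready:\n%s", out)
			return
		}
		// ERROR 2002 (socket not up) and ERROR 1045 (users not yet
		// provisioned) both surface as exit code 1; back off and retry.
		time.Sleep(5 * time.Second)
	}
	fmt.Println("mysql never became ready")
}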

                                                
                                    
TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/788442/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh "sudo cat /etc/test/nested/copy/788442/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

TestFunctional/parallel/CertSync (1.72s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/788442.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh "sudo cat /etc/ssl/certs/788442.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/788442.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh "sudo cat /usr/share/ca-certificates/788442.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/7884422.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh "sudo cat /etc/ssl/certs/7884422.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/7884422.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh "sudo cat /usr/share/ca-certificates/7884422.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.72s)

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-344728 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-344728 ssh "sudo systemctl is-active docker": exit status 1 (256.331857ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-344728 ssh "sudo systemctl is-active crio": exit status 1 (278.411821ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.54s)
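
The two "failures" above are the assertion: with containerd selected, systemctl is-active must report "inactive" for docker and crio, and systemd signals a non-active unit with exit status 3, which minikube's ssh layer surfaces as "Process exited with status 3". A sketch that checks both the exit code and the stdout:

// "systemctl is-active" exits 0 when the unit is active and non-zero
// (typically 3) when it is not, so require a failing exit plus the
// literal "inactive" on stdout.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "crio"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-344728",
			"ssh", "sudo systemctl is-active "+unit).Output()
		disabled := err != nil && strings.TrimSpace(string(out)) == "inactive"
		fmt.Printf("%s disabled: %v\n", unit, disabled)
	}
}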

                                                
                                    
TestFunctional/parallel/License (0.22s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.22s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-344728 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-344728 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-njdzv" [cfe1d6b1-c719-41b1-8b35-33f30f9d163f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-njdzv" [cfe1d6b1-c719-41b1-8b35-33f30f9d163f] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.007245717s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.35s)
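
The Pending -> Running transitions above come from the harness polling pods by label until the deployment is healthy. The same wait can be expressed without a hand-rolled loop; a sketch that delegates to kubectl's built-in readiness wait instead of the harness's poller:

// Delegate the label-selector poll to "kubectl wait".
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-344728",
		"wait", "--for=condition=ready", "pod",
		"-l", "app=hello-node", "--timeout=10m").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("hello-node never became ready:", err)
	}
}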

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.96s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.96s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-344728 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-344728
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-344728
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-344728 image ls --format short --alsologtostderr:
I0316 17:05:28.874949  796439 out.go:291] Setting OutFile to fd 1 ...
I0316 17:05:28.875546  796439 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 17:05:28.875566  796439 out.go:304] Setting ErrFile to fd 2...
I0316 17:05:28.875575  796439 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 17:05:28.875965  796439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-781196/.minikube/bin
I0316 17:05:28.877359  796439 config.go:182] Loaded profile config "functional-344728": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0316 17:05:28.877573  796439 config.go:182] Loaded profile config "functional-344728": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0316 17:05:28.878177  796439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 17:05:28.878249  796439 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 17:05:28.895598  796439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38097
I0316 17:05:28.896223  796439 main.go:141] libmachine: () Calling .GetVersion
I0316 17:05:28.896896  796439 main.go:141] libmachine: Using API Version  1
I0316 17:05:28.896926  796439 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 17:05:28.897460  796439 main.go:141] libmachine: () Calling .GetMachineName
I0316 17:05:28.897757  796439 main.go:141] libmachine: (functional-344728) Calling .GetState
I0316 17:05:28.900109  796439 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 17:05:28.900168  796439 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 17:05:28.917361  796439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46673
I0316 17:05:28.917882  796439 main.go:141] libmachine: () Calling .GetVersion
I0316 17:05:28.918504  796439 main.go:141] libmachine: Using API Version  1
I0316 17:05:28.918548  796439 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 17:05:28.918944  796439 main.go:141] libmachine: () Calling .GetMachineName
I0316 17:05:28.919218  796439 main.go:141] libmachine: (functional-344728) Calling .DriverName
I0316 17:05:28.919502  796439 ssh_runner.go:195] Run: systemctl --version
I0316 17:05:28.919532  796439 main.go:141] libmachine: (functional-344728) Calling .GetSSHHostname
I0316 17:05:28.922841  796439 main.go:141] libmachine: (functional-344728) DBG | domain functional-344728 has defined MAC address 52:54:00:c0:72:76 in network mk-functional-344728
I0316 17:05:28.923427  796439 main.go:141] libmachine: (functional-344728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:72:76", ip: ""} in network mk-functional-344728: {Iface:virbr1 ExpiryTime:2024-03-16 18:02:19 +0000 UTC Type:0 Mac:52:54:00:c0:72:76 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:functional-344728 Clientid:01:52:54:00:c0:72:76}
I0316 17:05:28.923485  796439 main.go:141] libmachine: (functional-344728) DBG | domain functional-344728 has defined IP address 192.168.39.188 and MAC address 52:54:00:c0:72:76 in network mk-functional-344728
I0316 17:05:28.923668  796439 main.go:141] libmachine: (functional-344728) Calling .GetSSHPort
I0316 17:05:28.923920  796439 main.go:141] libmachine: (functional-344728) Calling .GetSSHKeyPath
I0316 17:05:28.924196  796439 main.go:141] libmachine: (functional-344728) Calling .GetSSHUsername
I0316 17:05:28.924376  796439 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/functional-344728/id_rsa Username:docker}
I0316 17:05:29.013452  796439 ssh_runner.go:195] Run: sudo crictl images --output json
I0316 17:05:29.087979  796439 main.go:141] libmachine: Making call to close driver server
I0316 17:05:29.087998  796439 main.go:141] libmachine: (functional-344728) Calling .Close
I0316 17:05:29.088347  796439 main.go:141] libmachine: (functional-344728) DBG | Closing plugin on server side
I0316 17:05:29.088355  796439 main.go:141] libmachine: Successfully made call to close driver server
I0316 17:05:29.088392  796439 main.go:141] libmachine: Making call to close connection to plugin binary
I0316 17:05:29.088417  796439 main.go:141] libmachine: Making call to close driver server
I0316 17:05:29.088428  796439 main.go:141] libmachine: (functional-344728) Calling .Close
I0316 17:05:29.088713  796439 main.go:141] libmachine: Successfully made call to close driver server
I0316 17:05:29.088732  796439 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-344728 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/google-containers/addon-resizer      | functional-344728  | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/kube-scheduler              | v1.28.4            | sha256:e3db31 | 18.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1            | sha256:ead0a4 | 16.2MB |
| registry.k8s.io/kube-controller-manager     | v1.28.4            | sha256:d058aa | 33.4MB |
| registry.k8s.io/kube-proxy                  | v1.28.4            | sha256:83f6cc | 24.6MB |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| localhost/my-image                          | functional-344728  | sha256:30a859 | 775kB  |
| registry.k8s.io/kube-apiserver              | v1.28.4            | sha256:7fe0e6 | 34.7MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/kindest/kindnetd                  | v20230809-80a64d96 | sha256:c7d129 | 27.7MB |
| docker.io/library/minikube-local-cache-test | functional-344728  | sha256:2f49ce | 1.01kB |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| docker.io/library/nginx                     | latest             | sha256:92b11f | 70.5MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/etcd                        | 3.5.9-0            | sha256:73deb9 | 103MB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-344728 image ls --format table --alsologtostderr:
I0316 17:05:34.618240  796618 out.go:291] Setting OutFile to fd 1 ...
I0316 17:05:34.618379  796618 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 17:05:34.618391  796618 out.go:304] Setting ErrFile to fd 2...
I0316 17:05:34.618396  796618 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 17:05:34.618621  796618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-781196/.minikube/bin
I0316 17:05:34.619392  796618 config.go:182] Loaded profile config "functional-344728": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0316 17:05:34.619726  796618 config.go:182] Loaded profile config "functional-344728": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0316 17:05:34.620346  796618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 17:05:34.620410  796618 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 17:05:34.639119  796618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43279
I0316 17:05:34.639726  796618 main.go:141] libmachine: () Calling .GetVersion
I0316 17:05:34.640388  796618 main.go:141] libmachine: Using API Version  1
I0316 17:05:34.640444  796618 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 17:05:34.640874  796618 main.go:141] libmachine: () Calling .GetMachineName
I0316 17:05:34.641109  796618 main.go:141] libmachine: (functional-344728) Calling .GetState
I0316 17:05:34.643340  796618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 17:05:34.643395  796618 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 17:05:34.659943  796618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46577
I0316 17:05:34.660501  796618 main.go:141] libmachine: () Calling .GetVersion
I0316 17:05:34.661028  796618 main.go:141] libmachine: Using API Version  1
I0316 17:05:34.661057  796618 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 17:05:34.661403  796618 main.go:141] libmachine: () Calling .GetMachineName
I0316 17:05:34.661578  796618 main.go:141] libmachine: (functional-344728) Calling .DriverName
I0316 17:05:34.661794  796618 ssh_runner.go:195] Run: systemctl --version
I0316 17:05:34.661831  796618 main.go:141] libmachine: (functional-344728) Calling .GetSSHHostname
I0316 17:05:34.665006  796618 main.go:141] libmachine: (functional-344728) DBG | domain functional-344728 has defined MAC address 52:54:00:c0:72:76 in network mk-functional-344728
I0316 17:05:34.666987  796618 main.go:141] libmachine: (functional-344728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:72:76", ip: ""} in network mk-functional-344728: {Iface:virbr1 ExpiryTime:2024-03-16 18:02:19 +0000 UTC Type:0 Mac:52:54:00:c0:72:76 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:functional-344728 Clientid:01:52:54:00:c0:72:76}
I0316 17:05:34.667027  796618 main.go:141] libmachine: (functional-344728) DBG | domain functional-344728 has defined IP address 192.168.39.188 and MAC address 52:54:00:c0:72:76 in network mk-functional-344728
I0316 17:05:34.667182  796618 main.go:141] libmachine: (functional-344728) Calling .GetSSHPort
I0316 17:05:34.667375  796618 main.go:141] libmachine: (functional-344728) Calling .GetSSHKeyPath
I0316 17:05:34.667575  796618 main.go:141] libmachine: (functional-344728) Calling .GetSSHUsername
I0316 17:05:34.671586  796618 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/functional-344728/id_rsa Username:docker}
I0316 17:05:34.772731  796618 ssh_runner.go:195] Run: sudo crictl images --output json
I0316 17:05:34.877133  796618 main.go:141] libmachine: Making call to close driver server
I0316 17:05:34.877157  796618 main.go:141] libmachine: (functional-344728) Calling .Close
I0316 17:05:34.877518  796618 main.go:141] libmachine: Successfully made call to close driver server
I0316 17:05:34.877540  796618 main.go:141] libmachine: (functional-344728) DBG | Closing plugin on server side
I0316 17:05:34.877551  796618 main.go:141] libmachine: Making call to close connection to plugin binary
I0316 17:05:34.877567  796618 main.go:141] libmachine: Making call to close driver server
I0316 17:05:34.877575  796618 main.go:141] libmachine: (functional-344728) Calling .Close
I0316 17:05:34.877887  796618 main.go:141] libmachine: Successfully made call to close driver server
I0316 17:05:34.877902  796618 main.go:141] libmachine: (functional-344728) DBG | Closing plugin on server side
I0316 17:05:34.877908  796618 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-344728 image ls --format json --alsologtostderr:
[{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"16190758"},{"id":"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"102894559"},{"id":"sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"33420443"},{"id":"sha256:2f49ce487b91ea07e678a91be9f0bae25f238fb9b6f074a8003a6a58ebb2eadc","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-344728"],"size":"1006"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-344728"],"size":"10823156"},{"id":"sha256:30a85937cd455759bf08fb46f911d5075356da87cd0717f858472aca65c906f5","repoDigests":[],"repoTags":["localhost/my-image:functional-344728"],"size":"774889"},{"id":"sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"34683820"},{"id":"sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"24581402"},{"id":"sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"18834488"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"27737299"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e","repoDigests":["docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e"],"repoTags":["docker.io/library/nginx:latest"],"size":"70534964"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-344728 image ls --format json --alsologtostderr:
I0316 17:05:34.324347  796594 out.go:291] Setting OutFile to fd 1 ...
I0316 17:05:34.324481  796594 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 17:05:34.324510  796594 out.go:304] Setting ErrFile to fd 2...
I0316 17:05:34.324516  796594 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 17:05:34.324721  796594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-781196/.minikube/bin
I0316 17:05:34.325325  796594 config.go:182] Loaded profile config "functional-344728": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0316 17:05:34.325440  796594 config.go:182] Loaded profile config "functional-344728": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0316 17:05:34.325872  796594 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 17:05:34.325936  796594 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 17:05:34.341818  796594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44195
I0316 17:05:34.342414  796594 main.go:141] libmachine: () Calling .GetVersion
I0316 17:05:34.343083  796594 main.go:141] libmachine: Using API Version  1
I0316 17:05:34.343114  796594 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 17:05:34.343482  796594 main.go:141] libmachine: () Calling .GetMachineName
I0316 17:05:34.343707  796594 main.go:141] libmachine: (functional-344728) Calling .GetState
I0316 17:05:34.345659  796594 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 17:05:34.345720  796594 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 17:05:34.362027  796594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39913
I0316 17:05:34.362602  796594 main.go:141] libmachine: () Calling .GetVersion
I0316 17:05:34.363136  796594 main.go:141] libmachine: Using API Version  1
I0316 17:05:34.363164  796594 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 17:05:34.363614  796594 main.go:141] libmachine: () Calling .GetMachineName
I0316 17:05:34.363803  796594 main.go:141] libmachine: (functional-344728) Calling .DriverName
I0316 17:05:34.364051  796594 ssh_runner.go:195] Run: systemctl --version
I0316 17:05:34.364079  796594 main.go:141] libmachine: (functional-344728) Calling .GetSSHHostname
I0316 17:05:34.367014  796594 main.go:141] libmachine: (functional-344728) DBG | domain functional-344728 has defined MAC address 52:54:00:c0:72:76 in network mk-functional-344728
I0316 17:05:34.367396  796594 main.go:141] libmachine: (functional-344728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:72:76", ip: ""} in network mk-functional-344728: {Iface:virbr1 ExpiryTime:2024-03-16 18:02:19 +0000 UTC Type:0 Mac:52:54:00:c0:72:76 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:functional-344728 Clientid:01:52:54:00:c0:72:76}
I0316 17:05:34.367425  796594 main.go:141] libmachine: (functional-344728) DBG | domain functional-344728 has defined IP address 192.168.39.188 and MAC address 52:54:00:c0:72:76 in network mk-functional-344728
I0316 17:05:34.367583  796594 main.go:141] libmachine: (functional-344728) Calling .GetSSHPort
I0316 17:05:34.367774  796594 main.go:141] libmachine: (functional-344728) Calling .GetSSHKeyPath
I0316 17:05:34.367941  796594 main.go:141] libmachine: (functional-344728) Calling .GetSSHUsername
I0316 17:05:34.368090  796594 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/functional-344728/id_rsa Username:docker}
I0316 17:05:34.451702  796594 ssh_runner.go:195] Run: sudo crictl images --output json
I0316 17:05:34.537068  796594 main.go:141] libmachine: Making call to close driver server
I0316 17:05:34.537088  796594 main.go:141] libmachine: (functional-344728) Calling .Close
I0316 17:05:34.537464  796594 main.go:141] libmachine: Successfully made call to close driver server
I0316 17:05:34.537491  796594 main.go:141] libmachine: Making call to close connection to plugin binary
I0316 17:05:34.537501  796594 main.go:141] libmachine: Making call to close driver server
I0316 17:05:34.537511  796594 main.go:141] libmachine: (functional-344728) Calling .Close
I0316 17:05:34.537766  796594 main.go:141] libmachine: Successfully made call to close driver server
I0316 17:05:34.537807  796594 main.go:141] libmachine: (functional-344728) DBG | Closing plugin on server side
I0316 17:05:34.537814  796594 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
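Note: the JSON listing above (its first record is cut off at the start of this section) is a flat array of image entries with the fields "id", "repoDigests", "repoTags", and "size", gathered via "sudo crictl images --output json" inside the VM as the stderr log shows. A minimal Go sketch for decoding that listing follows; the struct is inferred from the fields visible in the output rather than taken from minikube's source, so treat it as illustrative.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// listedImage mirrors the fields visible in the "image ls --format json" output above.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // byte count, serialized as a string
}

func main() {
	// Same invocation the test makes, with this report's binary path and profile.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-344728",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%v  %s bytes\n", img.RepoTags, img.Size)
	}
}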

TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-344728 image ls --format yaml --alsologtostderr:
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "33420443"
- id: sha256:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "24581402"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "16190758"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-344728
size: "10823156"
- id: sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "102894559"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "27737299"
- id: sha256:2f49ce487b91ea07e678a91be9f0bae25f238fb9b6f074a8003a6a58ebb2eadc
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-344728
size: "1006"
- id: sha256:92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e
repoDigests:
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
repoTags:
- docker.io/library/nginx:latest
size: "70534964"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "34683820"
- id: sha256:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "18834488"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-344728 image ls --format yaml --alsologtostderr:
I0316 17:05:29.160227  796463 out.go:291] Setting OutFile to fd 1 ...
I0316 17:05:29.160519  796463 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 17:05:29.160530  796463 out.go:304] Setting ErrFile to fd 2...
I0316 17:05:29.160534  796463 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 17:05:29.160801  796463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-781196/.minikube/bin
I0316 17:05:29.161484  796463 config.go:182] Loaded profile config "functional-344728": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0316 17:05:29.161624  796463 config.go:182] Loaded profile config "functional-344728": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0316 17:05:29.162056  796463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 17:05:29.162113  796463 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 17:05:29.178801  796463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39335
I0316 17:05:29.179379  796463 main.go:141] libmachine: () Calling .GetVersion
I0316 17:05:29.180164  796463 main.go:141] libmachine: Using API Version  1
I0316 17:05:29.180205  796463 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 17:05:29.180658  796463 main.go:141] libmachine: () Calling .GetMachineName
I0316 17:05:29.180923  796463 main.go:141] libmachine: (functional-344728) Calling .GetState
I0316 17:05:29.183416  796463 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 17:05:29.183525  796463 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 17:05:29.200365  796463 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38243
I0316 17:05:29.201015  796463 main.go:141] libmachine: () Calling .GetVersion
I0316 17:05:29.201712  796463 main.go:141] libmachine: Using API Version  1
I0316 17:05:29.201795  796463 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 17:05:29.202158  796463 main.go:141] libmachine: () Calling .GetMachineName
I0316 17:05:29.202410  796463 main.go:141] libmachine: (functional-344728) Calling .DriverName
I0316 17:05:29.202670  796463 ssh_runner.go:195] Run: systemctl --version
I0316 17:05:29.202707  796463 main.go:141] libmachine: (functional-344728) Calling .GetSSHHostname
I0316 17:05:29.206041  796463 main.go:141] libmachine: (functional-344728) DBG | domain functional-344728 has defined MAC address 52:54:00:c0:72:76 in network mk-functional-344728
I0316 17:05:29.206513  796463 main.go:141] libmachine: (functional-344728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:72:76", ip: ""} in network mk-functional-344728: {Iface:virbr1 ExpiryTime:2024-03-16 18:02:19 +0000 UTC Type:0 Mac:52:54:00:c0:72:76 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:functional-344728 Clientid:01:52:54:00:c0:72:76}
I0316 17:05:29.206551  796463 main.go:141] libmachine: (functional-344728) DBG | domain functional-344728 has defined IP address 192.168.39.188 and MAC address 52:54:00:c0:72:76 in network mk-functional-344728
I0316 17:05:29.206694  796463 main.go:141] libmachine: (functional-344728) Calling .GetSSHPort
I0316 17:05:29.206933  796463 main.go:141] libmachine: (functional-344728) Calling .GetSSHKeyPath
I0316 17:05:29.207129  796463 main.go:141] libmachine: (functional-344728) Calling .GetSSHUsername
I0316 17:05:29.207291  796463 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/functional-344728/id_rsa Username:docker}
I0316 17:05:29.306907  796463 ssh_runner.go:195] Run: sudo crictl images --output json
I0316 17:05:29.410817  796463 main.go:141] libmachine: Making call to close driver server
I0316 17:05:29.410850  796463 main.go:141] libmachine: (functional-344728) Calling .Close
I0316 17:05:29.411198  796463 main.go:141] libmachine: Successfully made call to close driver server
I0316 17:05:29.411221  796463 main.go:141] libmachine: Making call to close connection to plugin binary
I0316 17:05:29.411229  796463 main.go:141] libmachine: (functional-344728) DBG | Closing plugin on server side
I0316 17:05:29.411241  796463 main.go:141] libmachine: Making call to close driver server
I0316 17:05:29.411309  796463 main.go:141] libmachine: (functional-344728) Calling .Close
I0316 17:05:29.411614  796463 main.go:141] libmachine: Successfully made call to close driver server
I0316 17:05:29.411634  796463 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-344728 ssh pgrep buildkitd: exit status 1 (233.45702ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 image build -t localhost/my-image:functional-344728 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-344728 image build -t localhost/my-image:functional-344728 testdata/build --alsologtostderr: (4.326716539s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-344728 image build -t localhost/my-image:functional-344728 testdata/build --alsologtostderr:
I0316 17:05:29.715092  796527 out.go:291] Setting OutFile to fd 1 ...
I0316 17:05:29.715259  796527 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 17:05:29.715276  796527 out.go:304] Setting ErrFile to fd 2...
I0316 17:05:29.715284  796527 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0316 17:05:29.715590  796527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-781196/.minikube/bin
I0316 17:05:29.716238  796527 config.go:182] Loaded profile config "functional-344728": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0316 17:05:29.716997  796527 config.go:182] Loaded profile config "functional-344728": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
I0316 17:05:29.717419  796527 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 17:05:29.717467  796527 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 17:05:29.734300  796527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39101
I0316 17:05:29.734842  796527 main.go:141] libmachine: () Calling .GetVersion
I0316 17:05:29.735564  796527 main.go:141] libmachine: Using API Version  1
I0316 17:05:29.735604  796527 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 17:05:29.736068  796527 main.go:141] libmachine: () Calling .GetMachineName
I0316 17:05:29.736334  796527 main.go:141] libmachine: (functional-344728) Calling .GetState
I0316 17:05:29.738433  796527 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
I0316 17:05:29.738487  796527 main.go:141] libmachine: Launching plugin server for driver kvm2
I0316 17:05:29.757096  796527 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44563
I0316 17:05:29.757686  796527 main.go:141] libmachine: () Calling .GetVersion
I0316 17:05:29.758312  796527 main.go:141] libmachine: Using API Version  1
I0316 17:05:29.758338  796527 main.go:141] libmachine: () Calling .SetConfigRaw
I0316 17:05:29.758668  796527 main.go:141] libmachine: () Calling .GetMachineName
I0316 17:05:29.758880  796527 main.go:141] libmachine: (functional-344728) Calling .DriverName
I0316 17:05:29.759175  796527 ssh_runner.go:195] Run: systemctl --version
I0316 17:05:29.759209  796527 main.go:141] libmachine: (functional-344728) Calling .GetSSHHostname
I0316 17:05:29.762736  796527 main.go:141] libmachine: (functional-344728) DBG | domain functional-344728 has defined MAC address 52:54:00:c0:72:76 in network mk-functional-344728
I0316 17:05:29.763200  796527 main.go:141] libmachine: (functional-344728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:72:76", ip: ""} in network mk-functional-344728: {Iface:virbr1 ExpiryTime:2024-03-16 18:02:19 +0000 UTC Type:0 Mac:52:54:00:c0:72:76 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:functional-344728 Clientid:01:52:54:00:c0:72:76}
I0316 17:05:29.763278  796527 main.go:141] libmachine: (functional-344728) DBG | domain functional-344728 has defined IP address 192.168.39.188 and MAC address 52:54:00:c0:72:76 in network mk-functional-344728
I0316 17:05:29.763355  796527 main.go:141] libmachine: (functional-344728) Calling .GetSSHPort
I0316 17:05:29.763670  796527 main.go:141] libmachine: (functional-344728) Calling .GetSSHKeyPath
I0316 17:05:29.763886  796527 main.go:141] libmachine: (functional-344728) Calling .GetSSHUsername
I0316 17:05:29.764068  796527 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/functional-344728/id_rsa Username:docker}
I0316 17:05:29.865731  796527 build_images.go:161] Building image from path: /tmp/build.3697803289.tar
I0316 17:05:29.865823  796527 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0316 17:05:29.887065  796527 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3697803289.tar
I0316 17:05:29.895539  796527 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3697803289.tar: stat -c "%s %y" /var/lib/minikube/build/build.3697803289.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3697803289.tar': No such file or directory
I0316 17:05:29.895592  796527 ssh_runner.go:362] scp /tmp/build.3697803289.tar --> /var/lib/minikube/build/build.3697803289.tar (3072 bytes)
I0316 17:05:29.945057  796527 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3697803289
I0316 17:05:29.963450  796527 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3697803289 -xf /var/lib/minikube/build/build.3697803289.tar
I0316 17:05:29.986686  796527 containerd.go:379] Building image: /var/lib/minikube/build/build.3697803289
I0316 17:05:29.986802  796527 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3697803289 --local dockerfile=/var/lib/minikube/build/build.3697803289 --output type=image,name=localhost/my-image:functional-344728
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.4s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.8s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.3s done
#8 exporting manifest sha256:ffa5f2b1758ec105cd9f0d71c508f0b51aef565b65c304ddc8a8182ba57813ce 0.0s done
#8 exporting config sha256:30a85937cd455759bf08fb46f911d5075356da87cd0717f858472aca65c906f5 0.0s done
#8 naming to localhost/my-image:functional-344728 done
#8 DONE 0.3s
I0316 17:05:33.927693  796527 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3697803289 --local dockerfile=/var/lib/minikube/build/build.3697803289 --output type=image,name=localhost/my-image:functional-344728: (3.940842112s)
I0316 17:05:33.927823  796527 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3697803289
I0316 17:05:33.946505  796527 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3697803289.tar
I0316 17:05:33.971066  796527 build_images.go:217] Built localhost/my-image:functional-344728 from /tmp/build.3697803289.tar
I0316 17:05:33.971109  796527 build_images.go:133] succeeded building to: functional-344728
I0316 17:05:33.971116  796527 build_images.go:134] failed building to: 
I0316 17:05:33.971144  796527 main.go:141] libmachine: Making call to close driver server
I0316 17:05:33.971160  796527 main.go:141] libmachine: (functional-344728) Calling .Close
I0316 17:05:33.971489  796527 main.go:141] libmachine: Successfully made call to close driver server
I0316 17:05:33.971511  796527 main.go:141] libmachine: Making call to close connection to plugin binary
I0316 17:05:33.971522  796527 main.go:141] libmachine: Making call to close driver server
I0316 17:05:33.971530  796527 main.go:141] libmachine: (functional-344728) Calling .Close
I0316 17:05:33.971551  796527 main.go:141] libmachine: (functional-344728) DBG | Closing plugin on server side
I0316 17:05:33.971867  796527 main.go:141] libmachine: Successfully made call to close driver server
I0316 17:05:33.971896  796527 main.go:141] libmachine: Making call to close connection to plugin binary
I0316 17:05:33.971967  796527 main.go:141] libmachine: (functional-344728) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.84s)
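Note: buildkit steps #1-#7 above pin down the test's Dockerfile almost exactly: FROM gcr.io/k8s-minikube/busybox:latest, RUN true, ADD content.txt /. The Go sketch below recreates an equivalent build context and replays the same CLI call. The Dockerfile text is inferred from the log (step #1 reports 97B, so the original likely carries extra comments or whitespace), and the content.txt payload is a placeholder.

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		panic(err)
	}
	// Dockerfile reconstructed from build steps [1/3]..[3/3] above.
	dockerfile := "FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("placeholder\n"), 0o644); err != nil {
		panic(err)
	}
	// Same invocation as the test; per the stderr log, minikube tars the context,
	// copies it to /var/lib/minikube/build/ in the VM, and drives buildctl there.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-344728", "image", "build",
		"-t", "localhost/my-image:functional-344728", dir, "--alsologtostderr")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}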

TestFunctional/parallel/ImageCommands/Setup (0.97s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-344728
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.97s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.11s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 image load --daemon gcr.io/google-containers/addon-resizer:functional-344728 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-344728 image load --daemon gcr.io/google-containers/addon-resizer:functional-344728 --alsologtostderr: (4.845121369s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.11s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "301.017409ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "67.507114ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "315.259243ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "66.781098ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
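Note: the Took figures above are wall-clock timings around each invocation; the --light variant is roughly 5x faster here, consistent with it skipping per-profile status checks. A sketch of the same measurement (binary path as in this report):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// timeCmd runs the minikube binary with the given args and returns the elapsed time.
func timeCmd(args ...string) time.Duration {
	start := time.Now()
	if err := exec.Command("out/minikube-linux-amd64", args...).Run(); err != nil {
		panic(err)
	}
	return time.Since(start)
}

func main() {
	fmt.Println("profile list -o json:        ", timeCmd("profile", "list", "-o", "json"))
	fmt.Println("profile list -o json --light:", timeCmd("profile", "list", "-o", "json", "--light"))
}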

TestFunctional/parallel/MountCmd/any-port (6.59s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-344728 /tmp/TestFunctionalparallelMountCmdany-port1064063795/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710608701764602289" to /tmp/TestFunctionalparallelMountCmdany-port1064063795/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710608701764602289" to /tmp/TestFunctionalparallelMountCmdany-port1064063795/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710608701764602289" to /tmp/TestFunctionalparallelMountCmdany-port1064063795/001/test-1710608701764602289
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-344728 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (273.771828ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 16 17:05 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 16 17:05 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 16 17:05 test-1710608701764602289
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh cat /mount-9p/test-1710608701764602289
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-344728 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [0ea5e622-3a3f-4a1b-aa7a-f2bee523788e] Pending
helpers_test.go:344: "busybox-mount" [0ea5e622-3a3f-4a1b-aa7a-f2bee523788e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [0ea5e622-3a3f-4a1b-aa7a-f2bee523788e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [0ea5e622-3a3f-4a1b-aa7a-f2bee523788e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.006836187s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-344728 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-344728 /tmp/TestFunctionalparallelMountCmdany-port1064063795/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.59s)
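Note: the first findmnt probe above fails with exit status 1 because the 9p mount is still coming up when it runs; the harness simply retries, and the second probe succeeds. Written out explicitly, that retry might look like the sketch below (the attempt count and delay are arbitrary choices, not the test's):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		// Same probe the test runs over minikube ssh.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-344728",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mounted after %d attempt(s):\n%s", attempt, out)
			return
		}
		time.Sleep(500 * time.Millisecond) // give the mount daemon time to finish
	}
	panic("/mount-9p never appeared")
}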

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 image load --daemon gcr.io/google-containers/addon-resizer:functional-344728 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-344728 image load --daemon gcr.io/google-containers/addon-resizer:functional-344728 --alsologtostderr: (2.921624234s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.17s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-344728
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 image load --daemon gcr.io/google-containers/addon-resizer:functional-344728 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-344728 image load --daemon gcr.io/google-containers/addon-resizer:functional-344728 --alsologtostderr: (5.481313422s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.62s)

TestFunctional/parallel/ServiceCmd/List (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.34s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 service list -o json
functional_test.go:1490: Took "350.120186ms" to run "out/minikube-linux-amd64 -p functional-344728 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

TestFunctional/parallel/MountCmd/specific-port (2.23s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-344728 /tmp/TestFunctionalparallelMountCmdspecific-port3253898360/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-344728 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (307.727826ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-344728 /tmp/TestFunctionalparallelMountCmdspecific-port3253898360/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-344728 ssh "sudo umount -f /mount-9p": exit status 1 (300.31225ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-344728 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-344728 /tmp/TestFunctionalparallelMountCmdspecific-port3253898360/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.23s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.188:32430
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

TestFunctional/parallel/ServiceCmd/Format (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

TestFunctional/parallel/ServiceCmd/URL (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.188:32430
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)
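Note: the endpoint found above, http://192.168.39.188:32430, is the service's node port on the VM's IP. The test only asserts that a URL is printed; a quick reachability check against that URL would look like the sketch below (the GET and its output handling are illustrative, not part of the test):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://192.168.39.188:32430") // endpoint from the run above
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Status)
	fmt.Printf("%.200s\n", body) // first 200 bytes of the response
}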

TestFunctional/parallel/MountCmd/VerifyCleanup (1.71s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-344728 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2171027419/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-344728 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2171027419/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-344728 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2171027419/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-344728 ssh "findmnt -T" /mount1: exit status 1 (423.943193ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-344728 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-344728 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2171027419/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-344728 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2171027419/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-344728 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2171027419/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.71s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 image save gcr.io/google-containers/addon-resizer:functional-344728 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-344728 image save gcr.io/google-containers/addon-resizer:functional-344728 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (1.551729265s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.55s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 image rm gcr.io/google-containers/addon-resizer:functional-344728 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.71s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-344728 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr: (3.038402088s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (3.30s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-344728
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-344728 image save --daemon gcr.io/google-containers/addon-resizer:functional-344728 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-344728 image save --daemon gcr.io/google-containers/addon-resizer:functional-344728 --alsologtostderr: (1.280914149s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-344728
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.32s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-344728
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-344728
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-344728
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (217.89s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-960413 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0316 17:05:47.595576  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
E0316 17:08:03.750690  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
E0316 17:08:31.436298  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-960413 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m37.107760641s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (217.89s)

TestMultiControlPlane/serial/DeployApp (8.59s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-960413 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-960413 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-960413 -- rollout status deployment/busybox: (5.800086567s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-960413 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-960413 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-960413 -- exec busybox-5b5d89c9d6-b8nhb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-960413 -- exec busybox-5b5d89c9d6-drp8z -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-960413 -- exec busybox-5b5d89c9d6-v767s -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-960413 -- exec busybox-5b5d89c9d6-b8nhb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-960413 -- exec busybox-5b5d89c9d6-drp8z -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-960413 -- exec busybox-5b5d89c9d6-v767s -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-960413 -- exec busybox-5b5d89c9d6-b8nhb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-960413 -- exec busybox-5b5d89c9d6-drp8z -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-960413 -- exec busybox-5b5d89c9d6-v767s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.59s)

TestMultiControlPlane/serial/PingHostFromPods (1.56s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-960413 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-960413 -- exec busybox-5b5d89c9d6-b8nhb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-960413 -- exec busybox-5b5d89c9d6-b8nhb -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-960413 -- exec busybox-5b5d89c9d6-drp8z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-960413 -- exec busybox-5b5d89c9d6-drp8z -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-960413 -- exec busybox-5b5d89c9d6-v767s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-960413 -- exec busybox-5b5d89c9d6-v767s -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.56s)
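Note: the pipeline above ("nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3") keeps the fifth line of the nslookup output and takes its third space-separated field, which is the resolved host IP that the follow-up ping targets (192.168.39.1 here). The same parse in Go, against an illustrative busybox-style transcript (the sample text is an assumption, not captured output):

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics: awk 'NR==5' | cut -d' ' -f3
func hostIP(nslookupOutput string) string {
	lines := strings.Split(nslookupOutput, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // NR==5 -> index 4
	if len(fields) < 3 {
		return ""
	}
	return fields[2] // cut -d' ' -f3
}

func main() {
	sample := "Server: 10.96.0.10\nAddress: 10.96.0.10:53\n\nName: host.minikube.internal\nAddress 1: 192.168.39.1 host.minikube.internal\n"
	fmt.Println(hostIP(sample)) // prints 192.168.39.1
}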

TestMultiControlPlane/serial/AddWorkerNode (48.17s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-960413 -v=7 --alsologtostderr
E0316 17:09:56.778929  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
E0316 17:09:56.784300  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
E0316 17:09:56.794668  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
E0316 17:09:56.815023  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
E0316 17:09:56.855409  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
E0316 17:09:56.936449  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
E0316 17:09:57.097224  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
E0316 17:09:57.417844  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
E0316 17:09:58.058907  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
E0316 17:09:59.339600  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
E0316 17:10:01.899908  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
E0316 17:10:07.020374  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
E0316 17:10:17.261095  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-960413 -v=7 --alsologtostderr: (47.190765174s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (48.17s)

TestMultiControlPlane/serial/NodeLabels (0.08s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-960413 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.08s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.61s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.61s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (15.12s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 cp testdata/cp-test.txt ha-960413:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 cp ha-960413:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile533487574/001/cp-test_ha-960413.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 cp ha-960413:/home/docker/cp-test.txt ha-960413-m02:/home/docker/cp-test_ha-960413_ha-960413-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413-m02 "sudo cat /home/docker/cp-test_ha-960413_ha-960413-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 cp ha-960413:/home/docker/cp-test.txt ha-960413-m03:/home/docker/cp-test_ha-960413_ha-960413-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413-m03 "sudo cat /home/docker/cp-test_ha-960413_ha-960413-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 cp ha-960413:/home/docker/cp-test.txt ha-960413-m04:/home/docker/cp-test_ha-960413_ha-960413-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413-m04 "sudo cat /home/docker/cp-test_ha-960413_ha-960413-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 cp testdata/cp-test.txt ha-960413-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 cp ha-960413-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile533487574/001/cp-test_ha-960413-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 cp ha-960413-m02:/home/docker/cp-test.txt ha-960413:/home/docker/cp-test_ha-960413-m02_ha-960413.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413 "sudo cat /home/docker/cp-test_ha-960413-m02_ha-960413.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 cp ha-960413-m02:/home/docker/cp-test.txt ha-960413-m03:/home/docker/cp-test_ha-960413-m02_ha-960413-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413-m03 "sudo cat /home/docker/cp-test_ha-960413-m02_ha-960413-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 cp ha-960413-m02:/home/docker/cp-test.txt ha-960413-m04:/home/docker/cp-test_ha-960413-m02_ha-960413-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413-m04 "sudo cat /home/docker/cp-test_ha-960413-m02_ha-960413-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 cp testdata/cp-test.txt ha-960413-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 cp ha-960413-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile533487574/001/cp-test_ha-960413-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 cp ha-960413-m03:/home/docker/cp-test.txt ha-960413:/home/docker/cp-test_ha-960413-m03_ha-960413.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413 "sudo cat /home/docker/cp-test_ha-960413-m03_ha-960413.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 cp ha-960413-m03:/home/docker/cp-test.txt ha-960413-m02:/home/docker/cp-test_ha-960413-m03_ha-960413-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413-m02 "sudo cat /home/docker/cp-test_ha-960413-m03_ha-960413-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 cp ha-960413-m03:/home/docker/cp-test.txt ha-960413-m04:/home/docker/cp-test_ha-960413-m03_ha-960413-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413-m04 "sudo cat /home/docker/cp-test_ha-960413-m03_ha-960413-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 cp testdata/cp-test.txt ha-960413-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 cp ha-960413-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile533487574/001/cp-test_ha-960413-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 cp ha-960413-m04:/home/docker/cp-test.txt ha-960413:/home/docker/cp-test_ha-960413-m04_ha-960413.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413 "sudo cat /home/docker/cp-test_ha-960413-m04_ha-960413.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 cp ha-960413-m04:/home/docker/cp-test.txt ha-960413-m02:/home/docker/cp-test_ha-960413-m04_ha-960413-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413-m02 "sudo cat /home/docker/cp-test_ha-960413-m04_ha-960413-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 cp ha-960413-m04:/home/docker/cp-test.txt ha-960413-m03:/home/docker/cp-test_ha-960413-m04_ha-960413-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 ssh -n ha-960413-m03 "sudo cat /home/docker/cp-test_ha-960413-m04_ha-960413-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.12s)
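
Every hop in the matrix above is the same round trip: `minikube cp` moves a file onto (or off) a node, then `minikube ssh -n <node> "sudo cat ..."` reads it back so the contents can be compared. One hop of that loop as a Go sketch, assuming the ha-960413 profile from this run:

	package main

	import (
		"bytes"
		"log"
		"os"
		"os/exec"
	)

	func main() {
		const profile = "ha-960413"
		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			log.Fatal(err)
		}
		// Push the local file to the m02 node...
		if err := exec.Command("out/minikube-linux-amd64", "-p", profile, "cp",
			"testdata/cp-test.txt", profile+"-m02:/home/docker/cp-test.txt").Run(); err != nil {
			log.Fatal(err)
		}
		// ...then read it back over SSH and compare byte-for-byte.
		got, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh",
			"-n", profile+"-m02", "sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			log.Fatal(err)
		}
		if !bytes.Equal(got, want) {
			log.Fatalf("round trip mismatch: got %q, want %q", got, want)
		}
	}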

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (93.27s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 node stop m02 -v=7 --alsologtostderr
E0316 17:10:37.742049  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
E0316 17:11:18.703188  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-960413 node stop m02 -v=7 --alsologtostderr: (1m32.501714764s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-960413 status -v=7 --alsologtostderr: exit status 7 (768.658018ms)

                                                
                                                
-- stdout --
	ha-960413
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-960413-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-960413-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-960413-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0316 17:12:07.655239  800789 out.go:291] Setting OutFile to fd 1 ...
	I0316 17:12:07.655467  800789 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:12:07.655479  800789 out.go:304] Setting ErrFile to fd 2...
	I0316 17:12:07.655485  800789 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:12:07.655724  800789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-781196/.minikube/bin
	I0316 17:12:07.655937  800789 out.go:298] Setting JSON to false
	I0316 17:12:07.655984  800789 mustload.go:65] Loading cluster: ha-960413
	I0316 17:12:07.656105  800789 notify.go:220] Checking for updates...
	I0316 17:12:07.656431  800789 config.go:182] Loaded profile config "ha-960413": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0316 17:12:07.656452  800789 status.go:255] checking status of ha-960413 ...
	I0316 17:12:07.656914  800789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:12:07.656991  800789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:12:07.683812  800789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42641
	I0316 17:12:07.684445  800789 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:12:07.685130  800789 main.go:141] libmachine: Using API Version  1
	I0316 17:12:07.685157  800789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:12:07.685622  800789 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:12:07.685855  800789 main.go:141] libmachine: (ha-960413) Calling .GetState
	I0316 17:12:07.687825  800789 status.go:330] ha-960413 host status = "Running" (err=<nil>)
	I0316 17:12:07.687847  800789 host.go:66] Checking if "ha-960413" exists ...
	I0316 17:12:07.688476  800789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:12:07.688527  800789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:12:07.706682  800789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36407
	I0316 17:12:07.707177  800789 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:12:07.707770  800789 main.go:141] libmachine: Using API Version  1
	I0316 17:12:07.707797  800789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:12:07.708244  800789 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:12:07.708470  800789 main.go:141] libmachine: (ha-960413) Calling .GetIP
	I0316 17:12:07.712237  800789 main.go:141] libmachine: (ha-960413) DBG | domain ha-960413 has defined MAC address 52:54:00:c5:75:e0 in network mk-ha-960413
	I0316 17:12:07.712799  800789 main.go:141] libmachine: (ha-960413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:75:e0", ip: ""} in network mk-ha-960413: {Iface:virbr1 ExpiryTime:2024-03-16 18:05:59 +0000 UTC Type:0 Mac:52:54:00:c5:75:e0 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-960413 Clientid:01:52:54:00:c5:75:e0}
	I0316 17:12:07.712862  800789 main.go:141] libmachine: (ha-960413) DBG | domain ha-960413 has defined IP address 192.168.39.216 and MAC address 52:54:00:c5:75:e0 in network mk-ha-960413
	I0316 17:12:07.713040  800789 host.go:66] Checking if "ha-960413" exists ...
	I0316 17:12:07.713364  800789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:12:07.713421  800789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:12:07.732385  800789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35089
	I0316 17:12:07.732903  800789 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:12:07.733504  800789 main.go:141] libmachine: Using API Version  1
	I0316 17:12:07.733532  800789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:12:07.734043  800789 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:12:07.734260  800789 main.go:141] libmachine: (ha-960413) Calling .DriverName
	I0316 17:12:07.734506  800789 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0316 17:12:07.734535  800789 main.go:141] libmachine: (ha-960413) Calling .GetSSHHostname
	I0316 17:12:07.737916  800789 main.go:141] libmachine: (ha-960413) DBG | domain ha-960413 has defined MAC address 52:54:00:c5:75:e0 in network mk-ha-960413
	I0316 17:12:07.738383  800789 main.go:141] libmachine: (ha-960413) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c5:75:e0", ip: ""} in network mk-ha-960413: {Iface:virbr1 ExpiryTime:2024-03-16 18:05:59 +0000 UTC Type:0 Mac:52:54:00:c5:75:e0 Iaid: IPaddr:192.168.39.216 Prefix:24 Hostname:ha-960413 Clientid:01:52:54:00:c5:75:e0}
	I0316 17:12:07.738410  800789 main.go:141] libmachine: (ha-960413) DBG | domain ha-960413 has defined IP address 192.168.39.216 and MAC address 52:54:00:c5:75:e0 in network mk-ha-960413
	I0316 17:12:07.738679  800789 main.go:141] libmachine: (ha-960413) Calling .GetSSHPort
	I0316 17:12:07.738869  800789 main.go:141] libmachine: (ha-960413) Calling .GetSSHKeyPath
	I0316 17:12:07.738991  800789 main.go:141] libmachine: (ha-960413) Calling .GetSSHUsername
	I0316 17:12:07.739149  800789 sshutil.go:53] new ssh client: &{IP:192.168.39.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/ha-960413/id_rsa Username:docker}
	I0316 17:12:07.827497  800789 ssh_runner.go:195] Run: systemctl --version
	I0316 17:12:07.839831  800789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 17:12:07.864984  800789 kubeconfig.go:125] found "ha-960413" server: "https://192.168.39.254:8443"
	I0316 17:12:07.865023  800789 api_server.go:166] Checking apiserver status ...
	I0316 17:12:07.865077  800789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 17:12:07.888028  800789 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup
	W0316 17:12:07.904328  800789 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1178/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0316 17:12:07.904404  800789 ssh_runner.go:195] Run: ls
	I0316 17:12:07.910649  800789 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0316 17:12:07.916401  800789 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0316 17:12:07.916439  800789 status.go:422] ha-960413 apiserver status = Running (err=<nil>)
	I0316 17:12:07.916455  800789 status.go:257] ha-960413 status: &{Name:ha-960413 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0316 17:12:07.916509  800789 status.go:255] checking status of ha-960413-m02 ...
	I0316 17:12:07.916961  800789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:12:07.916997  800789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:12:07.933224  800789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34023
	I0316 17:12:07.933803  800789 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:12:07.934341  800789 main.go:141] libmachine: Using API Version  1
	I0316 17:12:07.934368  800789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:12:07.934819  800789 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:12:07.935103  800789 main.go:141] libmachine: (ha-960413-m02) Calling .GetState
	I0316 17:12:07.937048  800789 status.go:330] ha-960413-m02 host status = "Stopped" (err=<nil>)
	I0316 17:12:07.937068  800789 status.go:343] host is not running, skipping remaining checks
	I0316 17:12:07.937075  800789 status.go:257] ha-960413-m02 status: &{Name:ha-960413-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0316 17:12:07.937110  800789 status.go:255] checking status of ha-960413-m03 ...
	I0316 17:12:07.937421  800789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:12:07.937456  800789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:12:07.955606  800789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39409
	I0316 17:12:07.956137  800789 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:12:07.956703  800789 main.go:141] libmachine: Using API Version  1
	I0316 17:12:07.956730  800789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:12:07.957122  800789 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:12:07.957349  800789 main.go:141] libmachine: (ha-960413-m03) Calling .GetState
	I0316 17:12:07.959169  800789 status.go:330] ha-960413-m03 host status = "Running" (err=<nil>)
	I0316 17:12:07.959192  800789 host.go:66] Checking if "ha-960413-m03" exists ...
	I0316 17:12:07.959643  800789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:12:07.959718  800789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:12:07.976633  800789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39099
	I0316 17:12:07.977100  800789 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:12:07.977714  800789 main.go:141] libmachine: Using API Version  1
	I0316 17:12:07.977736  800789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:12:07.978087  800789 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:12:07.978354  800789 main.go:141] libmachine: (ha-960413-m03) Calling .GetIP
	I0316 17:12:07.981551  800789 main.go:141] libmachine: (ha-960413-m03) DBG | domain ha-960413-m03 has defined MAC address 52:54:00:42:15:8f in network mk-ha-960413
	I0316 17:12:07.982100  800789 main.go:141] libmachine: (ha-960413-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:15:8f", ip: ""} in network mk-ha-960413: {Iface:virbr1 ExpiryTime:2024-03-16 18:08:22 +0000 UTC Type:0 Mac:52:54:00:42:15:8f Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-960413-m03 Clientid:01:52:54:00:42:15:8f}
	I0316 17:12:07.982129  800789 main.go:141] libmachine: (ha-960413-m03) DBG | domain ha-960413-m03 has defined IP address 192.168.39.48 and MAC address 52:54:00:42:15:8f in network mk-ha-960413
	I0316 17:12:07.982279  800789 host.go:66] Checking if "ha-960413-m03" exists ...
	I0316 17:12:07.982607  800789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:12:07.982651  800789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:12:07.999961  800789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41929
	I0316 17:12:08.000676  800789 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:12:08.001246  800789 main.go:141] libmachine: Using API Version  1
	I0316 17:12:08.001268  800789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:12:08.001626  800789 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:12:08.001859  800789 main.go:141] libmachine: (ha-960413-m03) Calling .DriverName
	I0316 17:12:08.002102  800789 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0316 17:12:08.002134  800789 main.go:141] libmachine: (ha-960413-m03) Calling .GetSSHHostname
	I0316 17:12:08.005602  800789 main.go:141] libmachine: (ha-960413-m03) DBG | domain ha-960413-m03 has defined MAC address 52:54:00:42:15:8f in network mk-ha-960413
	I0316 17:12:08.006050  800789 main.go:141] libmachine: (ha-960413-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:42:15:8f", ip: ""} in network mk-ha-960413: {Iface:virbr1 ExpiryTime:2024-03-16 18:08:22 +0000 UTC Type:0 Mac:52:54:00:42:15:8f Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-960413-m03 Clientid:01:52:54:00:42:15:8f}
	I0316 17:12:08.006092  800789 main.go:141] libmachine: (ha-960413-m03) DBG | domain ha-960413-m03 has defined IP address 192.168.39.48 and MAC address 52:54:00:42:15:8f in network mk-ha-960413
	I0316 17:12:08.006214  800789 main.go:141] libmachine: (ha-960413-m03) Calling .GetSSHPort
	I0316 17:12:08.006435  800789 main.go:141] libmachine: (ha-960413-m03) Calling .GetSSHKeyPath
	I0316 17:12:08.006617  800789 main.go:141] libmachine: (ha-960413-m03) Calling .GetSSHUsername
	I0316 17:12:08.006788  800789 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/ha-960413-m03/id_rsa Username:docker}
	I0316 17:12:08.094796  800789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 17:12:08.118634  800789 kubeconfig.go:125] found "ha-960413" server: "https://192.168.39.254:8443"
	I0316 17:12:08.118671  800789 api_server.go:166] Checking apiserver status ...
	I0316 17:12:08.118710  800789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 17:12:08.142380  800789 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1189/cgroup
	W0316 17:12:08.158853  800789 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1189/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0316 17:12:08.158935  800789 ssh_runner.go:195] Run: ls
	I0316 17:12:08.164641  800789 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0316 17:12:08.169897  800789 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0316 17:12:08.169931  800789 status.go:422] ha-960413-m03 apiserver status = Running (err=<nil>)
	I0316 17:12:08.169941  800789 status.go:257] ha-960413-m03 status: &{Name:ha-960413-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0316 17:12:08.169961  800789 status.go:255] checking status of ha-960413-m04 ...
	I0316 17:12:08.170373  800789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:12:08.170405  800789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:12:08.190836  800789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38191
	I0316 17:12:08.191306  800789 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:12:08.191944  800789 main.go:141] libmachine: Using API Version  1
	I0316 17:12:08.191972  800789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:12:08.192370  800789 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:12:08.192551  800789 main.go:141] libmachine: (ha-960413-m04) Calling .GetState
	I0316 17:12:08.194189  800789 status.go:330] ha-960413-m04 host status = "Running" (err=<nil>)
	I0316 17:12:08.194211  800789 host.go:66] Checking if "ha-960413-m04" exists ...
	I0316 17:12:08.194631  800789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:12:08.194685  800789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:12:08.212655  800789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40669
	I0316 17:12:08.213154  800789 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:12:08.213795  800789 main.go:141] libmachine: Using API Version  1
	I0316 17:12:08.213829  800789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:12:08.214190  800789 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:12:08.214450  800789 main.go:141] libmachine: (ha-960413-m04) Calling .GetIP
	I0316 17:12:08.217539  800789 main.go:141] libmachine: (ha-960413-m04) DBG | domain ha-960413-m04 has defined MAC address 52:54:00:33:05:1f in network mk-ha-960413
	I0316 17:12:08.217999  800789 main.go:141] libmachine: (ha-960413-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:05:1f", ip: ""} in network mk-ha-960413: {Iface:virbr1 ExpiryTime:2024-03-16 18:09:48 +0000 UTC Type:0 Mac:52:54:00:33:05:1f Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-960413-m04 Clientid:01:52:54:00:33:05:1f}
	I0316 17:12:08.218026  800789 main.go:141] libmachine: (ha-960413-m04) DBG | domain ha-960413-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:33:05:1f in network mk-ha-960413
	I0316 17:12:08.218247  800789 host.go:66] Checking if "ha-960413-m04" exists ...
	I0316 17:12:08.218689  800789 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:12:08.218732  800789 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:12:08.236333  800789 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33645
	I0316 17:12:08.236909  800789 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:12:08.237572  800789 main.go:141] libmachine: Using API Version  1
	I0316 17:12:08.237600  800789 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:12:08.238055  800789 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:12:08.238277  800789 main.go:141] libmachine: (ha-960413-m04) Calling .DriverName
	I0316 17:12:08.238496  800789 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0316 17:12:08.238522  800789 main.go:141] libmachine: (ha-960413-m04) Calling .GetSSHHostname
	I0316 17:12:08.241729  800789 main.go:141] libmachine: (ha-960413-m04) DBG | domain ha-960413-m04 has defined MAC address 52:54:00:33:05:1f in network mk-ha-960413
	I0316 17:12:08.242360  800789 main.go:141] libmachine: (ha-960413-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:05:1f", ip: ""} in network mk-ha-960413: {Iface:virbr1 ExpiryTime:2024-03-16 18:09:48 +0000 UTC Type:0 Mac:52:54:00:33:05:1f Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:ha-960413-m04 Clientid:01:52:54:00:33:05:1f}
	I0316 17:12:08.242390  800789 main.go:141] libmachine: (ha-960413-m04) DBG | domain ha-960413-m04 has defined IP address 192.168.39.121 and MAC address 52:54:00:33:05:1f in network mk-ha-960413
	I0316 17:12:08.242561  800789 main.go:141] libmachine: (ha-960413-m04) Calling .GetSSHPort
	I0316 17:12:08.242764  800789 main.go:141] libmachine: (ha-960413-m04) Calling .GetSSHKeyPath
	I0316 17:12:08.242977  800789 main.go:141] libmachine: (ha-960413-m04) Calling .GetSSHUsername
	I0316 17:12:08.243124  800789 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/ha-960413-m04/id_rsa Username:docker}
	I0316 17:12:08.335122  800789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 17:12:08.355889  800789 status.go:257] ha-960413-m04 status: &{Name:ha-960413-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (93.27s)
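
The `exit status 7` above is the interesting assertion: with m02 stopped, `minikube status` still prints the per-node table but signals the degraded state through its exit code, so the harness accepts the non-zero exit instead of treating it as a failure. A sketch of reading that code from Go; the meaning of 7 is inferred from this run, not from minikube documentation:

	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-960413", "status")
		out, err := cmd.Output()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("all nodes healthy")
		case errors.As(err, &exitErr):
			// Exit status 7 is what this run produced while m02 was stopped.
			fmt.Printf("degraded, exit code %d:\n%s", exitErr.ExitCode(), out)
		default:
			log.Fatal(err) // e.g. binary not found
		}
	}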

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.44s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.44s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (45.39s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 node start m02 -v=7 --alsologtostderr
E0316 17:12:40.623874  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-960413 node start m02 -v=7 --alsologtostderr: (44.344340227s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (45.39s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.62s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.62s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (492.94s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-960413 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-960413 -v=7 --alsologtostderr
E0316 17:13:03.750381  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
E0316 17:14:56.779187  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
E0316 17:15:24.464449  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-960413 -v=7 --alsologtostderr: (4m39.573013668s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-960413 --wait=true -v=7 --alsologtostderr
E0316 17:18:03.750601  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
E0316 17:19:26.797210  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
E0316 17:19:56.779630  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-960413 --wait=true -v=7 --alsologtostderr: (3m33.21619209s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-960413
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (492.94s)
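
The pattern here is capture, stop, restart, compare: `node list` is recorded before the `stop`, re-run after `start --wait=true`, and the test only passes if the node set is unchanged. A sketch of that comparison under the same profile:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func nodeList(profile string) (string, error) {
		out, err := exec.Command("out/minikube-linux-amd64", "node", "list", "-p", profile).Output()
		return string(out), err
	}

	func main() {
		before, err := nodeList("ha-960413")
		if err != nil {
			log.Fatal(err)
		}
		// ... stop and restart the cluster here ...
		after, err := nodeList("ha-960413")
		if err != nil {
			log.Fatal(err)
		}
		if before != after {
			log.Fatalf("node list changed across restart:\nbefore:\n%s\nafter:\n%s", before, after)
		}
		fmt.Print("node list preserved:\n", before)
	}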

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (8.75s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-960413 node delete m03 -v=7 --alsologtostderr: (7.869974599s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (8.75s)
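
The quoted go-template ranges over every node's `.status.conditions` and prints the status of the `Ready` condition, so a healthy cluster renders as one ` True` line per node. Because Go's text/template indexes generic maps by key, the same template can be evaluated locally against decoded `kubectl get nodes -o json`; a sketch:

	package main

	import (
		"encoding/json"
		"log"
		"os"
		"os/exec"
		"text/template"
	)

	const ready = `{{range .items}}{{range .status.conditions}}` +
		`{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	func main() {
		out, err := exec.Command("kubectl", "--context", "ha-960413",
			"get", "nodes", "-o", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var nodes map[string]interface{}
		if err := json.Unmarshal(out, &nodes); err != nil {
			log.Fatal(err)
		}
		// Prints one " True" (or " False"/" Unknown") line per node.
		tmpl := template.Must(template.New("ready").Parse(ready))
		if err := tmpl.Execute(os.Stdout, nodes); err != nil {
			log.Fatal(err)
		}
	}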

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.44s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.44s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (276.76s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 stop -v=7 --alsologtostderr
E0316 17:23:03.750374  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
E0316 17:24:56.779687  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-960413 stop -v=7 --alsologtostderr: (4m36.62176495s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-960413 status -v=7 --alsologtostderr: exit status 7 (135.602532ms)

                                                
                                                
-- stdout --
	ha-960413
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-960413-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-960413-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0316 17:25:53.628417  804361 out.go:291] Setting OutFile to fd 1 ...
	I0316 17:25:53.628581  804361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:25:53.628590  804361 out.go:304] Setting ErrFile to fd 2...
	I0316 17:25:53.628595  804361 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:25:53.628821  804361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-781196/.minikube/bin
	I0316 17:25:53.629016  804361 out.go:298] Setting JSON to false
	I0316 17:25:53.629060  804361 mustload.go:65] Loading cluster: ha-960413
	I0316 17:25:53.629210  804361 notify.go:220] Checking for updates...
	I0316 17:25:53.629494  804361 config.go:182] Loaded profile config "ha-960413": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0316 17:25:53.629513  804361 status.go:255] checking status of ha-960413 ...
	I0316 17:25:53.629982  804361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:25:53.630050  804361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:25:53.653154  804361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39747
	I0316 17:25:53.653675  804361 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:25:53.654457  804361 main.go:141] libmachine: Using API Version  1
	I0316 17:25:53.654490  804361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:25:53.654901  804361 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:25:53.655118  804361 main.go:141] libmachine: (ha-960413) Calling .GetState
	I0316 17:25:53.656776  804361 status.go:330] ha-960413 host status = "Stopped" (err=<nil>)
	I0316 17:25:53.656792  804361 status.go:343] host is not running, skipping remaining checks
	I0316 17:25:53.656798  804361 status.go:257] ha-960413 status: &{Name:ha-960413 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0316 17:25:53.656837  804361 status.go:255] checking status of ha-960413-m02 ...
	I0316 17:25:53.657142  804361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:25:53.657194  804361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:25:53.673551  804361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42923
	I0316 17:25:53.674116  804361 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:25:53.674694  804361 main.go:141] libmachine: Using API Version  1
	I0316 17:25:53.674723  804361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:25:53.675124  804361 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:25:53.675386  804361 main.go:141] libmachine: (ha-960413-m02) Calling .GetState
	I0316 17:25:53.677218  804361 status.go:330] ha-960413-m02 host status = "Stopped" (err=<nil>)
	I0316 17:25:53.677240  804361 status.go:343] host is not running, skipping remaining checks
	I0316 17:25:53.677248  804361 status.go:257] ha-960413-m02 status: &{Name:ha-960413-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0316 17:25:53.677272  804361 status.go:255] checking status of ha-960413-m04 ...
	I0316 17:25:53.677558  804361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:25:53.677641  804361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:25:53.693811  804361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35905
	I0316 17:25:53.694312  804361 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:25:53.694806  804361 main.go:141] libmachine: Using API Version  1
	I0316 17:25:53.694831  804361 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:25:53.695187  804361 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:25:53.695391  804361 main.go:141] libmachine: (ha-960413-m04) Calling .GetState
	I0316 17:25:53.697268  804361 status.go:330] ha-960413-m04 host status = "Stopped" (err=<nil>)
	I0316 17:25:53.697296  804361 status.go:343] host is not running, skipping remaining checks
	I0316 17:25:53.697304  804361 status.go:257] ha-960413-m04 status: &{Name:ha-960413-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (276.76s)
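
With every host down, the status checker short-circuits each node ("host is not running, skipping remaining checks") and stdout reduces to the fixed per-node field block above. A sketch that parses that plain-text shape into a map, assuming exactly the layout shown (a bare name line opens a node block, `key: value` lines fill it):

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseStatus splits `minikube status` text output (as captured above)
	// into per-node field maps, keyed by node name.
	func parseStatus(out string) map[string]map[string]string {
		nodes := map[string]map[string]string{}
		var current string
		sc := bufio.NewScanner(strings.NewReader(out))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case line == "":
				current = "" // blank line closes the current node block
			case !strings.Contains(line, ":"):
				current = line // a bare line starts a new node block
				nodes[current] = map[string]string{}
			case current != "":
				k, v, _ := strings.Cut(line, ":")
				nodes[current][strings.TrimSpace(k)] = strings.TrimSpace(v)
			}
		}
		return nodes
	}

	func main() {
		sample := "ha-960413\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n"
		for name, fields := range parseStatus(sample) {
			fmt.Println(name, fields["host"], fields["kubelet"])
		}
	}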

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (167.38s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-960413 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0316 17:26:19.824991  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
E0316 17:28:03.750815  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-960413 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m46.517235306s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (167.38s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.43s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.43s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (78.9s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-960413 --control-plane -v=7 --alsologtostderr
E0316 17:29:56.779520  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-960413 --control-plane -v=7 --alsologtostderr: (1m17.95123476s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-960413 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.90s)
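
`node add --control-plane` is the counterpart of the worker add at the top of this group; afterwards `status` should list one more `type: Control Plane` entry. An alternative count via the standard kubeadm node label, assuming ha-960413's control-plane nodes carry `node-role.kubernetes.io/control-plane` (the label itself is not shown in this log):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "ha-960413", "get", "nodes",
			"-l", "node-role.kubernetes.io/control-plane", "--no-headers").Output()
		if err != nil {
			log.Fatal(err)
		}
		// kubectl prints one line per matching node; empty output means zero.
		count := 0
		if trimmed := strings.TrimSpace(string(out)); trimmed != "" {
			count = len(strings.Split(trimmed, "\n"))
		}
		fmt.Printf("control-plane nodes: %d\n", count)
	}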

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.61s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.61s)

                                                
                                    
TestJSONOutput/start/Command (100.28s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-011053 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-011053 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m40.284042871s)
--- PASS: TestJSONOutput/start/Command (100.28s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.85s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-011053 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.85s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.72s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-011053 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.72s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.39s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-011053 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-011053 --output=json --user=testUser: (7.389727166s)
--- PASS: TestJSONOutput/stop/Command (7.39s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-245873 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-245873 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (85.035351ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2fdb2940-06d4-4852-ae08-8db89802b848","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-245873] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f55c555f-ec4e-4628-bc53-21896c3b2f91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18277"}}
	{"specversion":"1.0","id":"d6eb94f2-a790-4167-83e2-c572977a4200","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"335be2d2-b0aa-4cd4-993d-e81e3419a314","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18277-781196/kubeconfig"}}
	{"specversion":"1.0","id":"09e31ca5-dc43-493a-b333-873653613bf4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-781196/.minikube"}}
	{"specversion":"1.0","id":"36ccc143-68e2-41e0-8f1e-fc70112af0ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"70b45e79-312d-4821-a3c0-5122c1a291a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1c9ea55a-304c-4d1e-8704-ade25f7bfff6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-245873" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-245873
--- PASS: TestErrorJSONOutput (0.25s)
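
Each stdout line above is a CloudEvents envelope (`specversion: 1.0`): the `type` field distinguishes `io.k8s.sigs.minikube.step`, `.info`, and `.error` events, and the payload sits under `data` as string fields. A sketch that consumes such a stream line by line, using only fields visible in this log:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"log"
		"os"
	)

	// event mirrors the envelope fields visible in the log above.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // pipe `minikube start --output=json` output here
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON noise
			}
			switch ev.Type {
			case "io.k8s.sigs.minikube.error":
				fmt.Printf("error %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
			case "io.k8s.sigs.minikube.step":
				fmt.Printf("step %s/%s: %s\n",
					ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
			}
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}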

                                                
                                    
x
+
TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (98.34s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-171693 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-171693 --driver=kvm2  --container-runtime=containerd: (46.42753131s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-174325 --driver=kvm2  --container-runtime=containerd
E0316 17:33:03.750120  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-174325 --driver=kvm2  --container-runtime=containerd: (49.028232719s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-171693
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-174325
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-174325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-174325
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-174325: (1.047937448s)
helpers_test.go:175: Cleaning up "first-171693" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-171693
--- PASS: TestMinikubeProfile (98.34s)
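
Note: "profile list -ojson" is the machine-readable counterpart of the profile commands exercised above. A sketch (assuming jq, and the top-level valid/invalid arrays that current minikube releases emit) for printing each usable profile's name:

    out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'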

TestMountStart/serial/StartWithMountFirst (28.05s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-651141 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-651141 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.053327479s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.05s)

TestMountStart/serial/VerifyMountFirst (0.43s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-651141 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-651141 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.43s)
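
Note: the two checks above assert that the host directory is visible at /minikube-host and that a 9p entry exists in the guest's mount table. A slightly stricter one-liner sketch of the same idea, failing the shell if either half is missing (the grep pattern is an assumption about the mount table wording):

    out/minikube-linux-amd64 -p mount-start-1-651141 ssh -- "mount | grep -q 9p && ls /minikube-host"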

TestMountStart/serial/StartWithMountSecond (28.68s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-668953 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-668953 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.679322186s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.68s)

TestMountStart/serial/VerifyMountSecond (0.42s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-668953 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-668953 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.42s)

TestMountStart/serial/DeleteFirst (0.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-651141 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

TestMountStart/serial/VerifyMountPostDelete (0.42s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-668953 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-668953 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.42s)

TestMountStart/serial/Stop (1.47s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-668953
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-668953: (1.468864029s)
--- PASS: TestMountStart/serial/Stop (1.47s)

TestMountStart/serial/RestartStopped (24.22s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-668953
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-668953: (23.216715893s)
E0316 17:34:56.779635  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
--- PASS: TestMountStart/serial/RestartStopped (24.22s)

TestMountStart/serial/VerifyMountPostStop (0.42s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-668953 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-668953 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)
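
Note: the RestartStopped step above restarts with a bare "start -p", yet this post-stop verification still finds the 9p mount, because minikube reapplies the flags persisted in the profile's saved config. A sketch of the cycle:

    out/minikube-linux-amd64 stop -p mount-start-2-668953
    out/minikube-linux-amd64 start -p mount-start-2-668953    # mount flags restored from the saved profile
    out/minikube-linux-amd64 -p mount-start-2-668953 ssh -- mount | grep 9p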

TestMultiNode/serial/FreshStart2Nodes (106.61s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-248386 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0316 17:36:06.798280  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-248386 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m46.162681288s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (106.61s)
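
Note: --nodes=2 together with --wait=true blocks until both machines are provisioned and healthy. A sketch for scripting the same readiness check afterwards (minikube names the kubeconfig context after the profile by default):

    kubectl --context multinode-248386 wait --for=condition=Ready node --all --timeout=120s
    out/minikube-linux-amd64 -p multinode-248386 status --alsologtostderr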

TestMultiNode/serial/DeployApp2Nodes (4.33s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-248386 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-248386 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-248386 -- rollout status deployment/busybox: (2.358771646s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-248386 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-248386 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-248386 -- exec busybox-5b5d89c9d6-97h7c -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-248386 -- exec busybox-5b5d89c9d6-wzk7x -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-248386 -- exec busybox-5b5d89c9d6-97h7c -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-248386 -- exec busybox-5b5d89c9d6-wzk7x -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-248386 -- exec busybox-5b5d89c9d6-97h7c -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-248386 -- exec busybox-5b5d89c9d6-wzk7x -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.33s)
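
Note: the deployment brings up two busybox replicas so the nslookup probes can run from more than one pod; whether they actually land on separate machines can be confirmed from the NODE column. A sketch:

    kubectl --context multinode-248386 get pods -o wide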

TestMultiNode/serial/PingHostFrom2Pods (0.97s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-248386 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-248386 -- exec busybox-5b5d89c9d6-97h7c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-248386 -- exec busybox-5b5d89c9d6-97h7c -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-248386 -- exec busybox-5b5d89c9d6-wzk7x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-248386 -- exec busybox-5b5d89c9d6-wzk7x -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)
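
Note: the awk 'NR==5' | cut -d' ' -f3 pipeline leans on BusyBox's nslookup output format, where the resolved answer appears on the fifth line and its third space-separated field is the IP; the follow-up ping then proves each pod can reach the host-side gateway (192.168.39.1 in this run). The probe, reusing a pod name from above:

    kubectl --context multinode-248386 exec busybox-5b5d89c9d6-97h7c -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"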

TestMultiNode/serial/AddNode (42.31s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-248386 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-248386 -v 3 --alsologtostderr: (41.664600601s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (42.31s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-248386 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.26s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.26s)

TestMultiNode/serial/CopyFile (8.13s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 cp testdata/cp-test.txt multinode-248386:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 ssh -n multinode-248386 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 cp multinode-248386:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3964262997/001/cp-test_multinode-248386.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 ssh -n multinode-248386 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 cp multinode-248386:/home/docker/cp-test.txt multinode-248386-m02:/home/docker/cp-test_multinode-248386_multinode-248386-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 ssh -n multinode-248386 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 ssh -n multinode-248386-m02 "sudo cat /home/docker/cp-test_multinode-248386_multinode-248386-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 cp multinode-248386:/home/docker/cp-test.txt multinode-248386-m03:/home/docker/cp-test_multinode-248386_multinode-248386-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 ssh -n multinode-248386 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 ssh -n multinode-248386-m03 "sudo cat /home/docker/cp-test_multinode-248386_multinode-248386-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 cp testdata/cp-test.txt multinode-248386-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 ssh -n multinode-248386-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 cp multinode-248386-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3964262997/001/cp-test_multinode-248386-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 ssh -n multinode-248386-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 cp multinode-248386-m02:/home/docker/cp-test.txt multinode-248386:/home/docker/cp-test_multinode-248386-m02_multinode-248386.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 ssh -n multinode-248386-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 ssh -n multinode-248386 "sudo cat /home/docker/cp-test_multinode-248386-m02_multinode-248386.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 cp multinode-248386-m02:/home/docker/cp-test.txt multinode-248386-m03:/home/docker/cp-test_multinode-248386-m02_multinode-248386-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 ssh -n multinode-248386-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 ssh -n multinode-248386-m03 "sudo cat /home/docker/cp-test_multinode-248386-m02_multinode-248386-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 cp testdata/cp-test.txt multinode-248386-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 ssh -n multinode-248386-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 cp multinode-248386-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3964262997/001/cp-test_multinode-248386-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 ssh -n multinode-248386-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 cp multinode-248386-m03:/home/docker/cp-test.txt multinode-248386:/home/docker/cp-test_multinode-248386-m03_multinode-248386.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 ssh -n multinode-248386-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 ssh -n multinode-248386 "sudo cat /home/docker/cp-test_multinode-248386-m03_multinode-248386.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 cp multinode-248386-m03:/home/docker/cp-test.txt multinode-248386-m02:/home/docker/cp-test_multinode-248386-m03_multinode-248386-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 ssh -n multinode-248386-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 ssh -n multinode-248386-m02 "sudo cat /home/docker/cp-test_multinode-248386-m03_multinode-248386-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.13s)
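
Note: "minikube cp" accepts a host path or <node>:<path> on either side, so the sequence above covers host-to-node, node-to-host, and node-to-node transfers, each verified by catting the file over ssh. A condensed sketch of the three directions (destination paths shortened for readability):

    out/minikube-linux-amd64 -p multinode-248386 cp testdata/cp-test.txt multinode-248386:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-248386 cp multinode-248386:/home/docker/cp-test.txt /tmp/cp-test.txt
    out/minikube-linux-amd64 -p multinode-248386 cp multinode-248386:/home/docker/cp-test.txt multinode-248386-m02:/home/docker/cp-test.txt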

TestMultiNode/serial/StopNode (2.57s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-248386 node stop m03: (1.631053682s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-248386 status: exit status 7 (472.291652ms)

-- stdout --
	multinode-248386
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-248386-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-248386-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-248386 status --alsologtostderr: exit status 7 (467.892138ms)

-- stdout --
	multinode-248386
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-248386-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-248386-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0316 17:37:43.666918  811378 out.go:291] Setting OutFile to fd 1 ...
	I0316 17:37:43.667041  811378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:37:43.667049  811378 out.go:304] Setting ErrFile to fd 2...
	I0316 17:37:43.667053  811378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:37:43.667239  811378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-781196/.minikube/bin
	I0316 17:37:43.667434  811378 out.go:298] Setting JSON to false
	I0316 17:37:43.667513  811378 mustload.go:65] Loading cluster: multinode-248386
	I0316 17:37:43.667613  811378 notify.go:220] Checking for updates...
	I0316 17:37:43.667897  811378 config.go:182] Loaded profile config "multinode-248386": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0316 17:37:43.667914  811378 status.go:255] checking status of multinode-248386 ...
	I0316 17:37:43.668406  811378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:37:43.668485  811378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:37:43.688282  811378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40873
	I0316 17:37:43.688940  811378 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:37:43.689785  811378 main.go:141] libmachine: Using API Version  1
	I0316 17:37:43.689819  811378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:37:43.690263  811378 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:37:43.690559  811378 main.go:141] libmachine: (multinode-248386) Calling .GetState
	I0316 17:37:43.692361  811378 status.go:330] multinode-248386 host status = "Running" (err=<nil>)
	I0316 17:37:43.692381  811378 host.go:66] Checking if "multinode-248386" exists ...
	I0316 17:37:43.692706  811378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:37:43.692759  811378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:37:43.709270  811378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39785
	I0316 17:37:43.709895  811378 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:37:43.710509  811378 main.go:141] libmachine: Using API Version  1
	I0316 17:37:43.710552  811378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:37:43.710980  811378 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:37:43.711229  811378 main.go:141] libmachine: (multinode-248386) Calling .GetIP
	I0316 17:37:43.714531  811378 main.go:141] libmachine: (multinode-248386) DBG | domain multinode-248386 has defined MAC address 52:54:00:e1:6f:bc in network mk-multinode-248386
	I0316 17:37:43.714968  811378 main.go:141] libmachine: (multinode-248386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:6f:bc", ip: ""} in network mk-multinode-248386: {Iface:virbr1 ExpiryTime:2024-03-16 18:35:15 +0000 UTC Type:0 Mac:52:54:00:e1:6f:bc Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:multinode-248386 Clientid:01:52:54:00:e1:6f:bc}
	I0316 17:37:43.715010  811378 main.go:141] libmachine: (multinode-248386) DBG | domain multinode-248386 has defined IP address 192.168.39.210 and MAC address 52:54:00:e1:6f:bc in network mk-multinode-248386
	I0316 17:37:43.715170  811378 host.go:66] Checking if "multinode-248386" exists ...
	I0316 17:37:43.715621  811378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:37:43.715681  811378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:37:43.733760  811378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34643
	I0316 17:37:43.734306  811378 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:37:43.734867  811378 main.go:141] libmachine: Using API Version  1
	I0316 17:37:43.734891  811378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:37:43.735226  811378 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:37:43.735433  811378 main.go:141] libmachine: (multinode-248386) Calling .DriverName
	I0316 17:37:43.735782  811378 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0316 17:37:43.735809  811378 main.go:141] libmachine: (multinode-248386) Calling .GetSSHHostname
	I0316 17:37:43.738948  811378 main.go:141] libmachine: (multinode-248386) DBG | domain multinode-248386 has defined MAC address 52:54:00:e1:6f:bc in network mk-multinode-248386
	I0316 17:37:43.739484  811378 main.go:141] libmachine: (multinode-248386) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:6f:bc", ip: ""} in network mk-multinode-248386: {Iface:virbr1 ExpiryTime:2024-03-16 18:35:15 +0000 UTC Type:0 Mac:52:54:00:e1:6f:bc Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:multinode-248386 Clientid:01:52:54:00:e1:6f:bc}
	I0316 17:37:43.739520  811378 main.go:141] libmachine: (multinode-248386) DBG | domain multinode-248386 has defined IP address 192.168.39.210 and MAC address 52:54:00:e1:6f:bc in network mk-multinode-248386
	I0316 17:37:43.739679  811378 main.go:141] libmachine: (multinode-248386) Calling .GetSSHPort
	I0316 17:37:43.739888  811378 main.go:141] libmachine: (multinode-248386) Calling .GetSSHKeyPath
	I0316 17:37:43.740032  811378 main.go:141] libmachine: (multinode-248386) Calling .GetSSHUsername
	I0316 17:37:43.740218  811378 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/multinode-248386/id_rsa Username:docker}
	I0316 17:37:43.825898  811378 ssh_runner.go:195] Run: systemctl --version
	I0316 17:37:43.833972  811378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 17:37:43.851801  811378 kubeconfig.go:125] found "multinode-248386" server: "https://192.168.39.210:8443"
	I0316 17:37:43.851834  811378 api_server.go:166] Checking apiserver status ...
	I0316 17:37:43.851871  811378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 17:37:43.868981  811378 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1170/cgroup
	W0316 17:37:43.880995  811378 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1170/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0316 17:37:43.881049  811378 ssh_runner.go:195] Run: ls
	I0316 17:37:43.886751  811378 api_server.go:253] Checking apiserver healthz at https://192.168.39.210:8443/healthz ...
	I0316 17:37:43.891344  811378 api_server.go:279] https://192.168.39.210:8443/healthz returned 200:
	ok
	I0316 17:37:43.891382  811378 status.go:422] multinode-248386 apiserver status = Running (err=<nil>)
	I0316 17:37:43.891393  811378 status.go:257] multinode-248386 status: &{Name:multinode-248386 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0316 17:37:43.891411  811378 status.go:255] checking status of multinode-248386-m02 ...
	I0316 17:37:43.891731  811378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:37:43.891783  811378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:37:43.907803  811378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35403
	I0316 17:37:43.908313  811378 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:37:43.908869  811378 main.go:141] libmachine: Using API Version  1
	I0316 17:37:43.908897  811378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:37:43.909264  811378 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:37:43.909490  811378 main.go:141] libmachine: (multinode-248386-m02) Calling .GetState
	I0316 17:37:43.911214  811378 status.go:330] multinode-248386-m02 host status = "Running" (err=<nil>)
	I0316 17:37:43.911235  811378 host.go:66] Checking if "multinode-248386-m02" exists ...
	I0316 17:37:43.911560  811378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:37:43.911608  811378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:37:43.927248  811378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35017
	I0316 17:37:43.927716  811378 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:37:43.928243  811378 main.go:141] libmachine: Using API Version  1
	I0316 17:37:43.928265  811378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:37:43.928612  811378 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:37:43.928810  811378 main.go:141] libmachine: (multinode-248386-m02) Calling .GetIP
	I0316 17:37:43.931768  811378 main.go:141] libmachine: (multinode-248386-m02) DBG | domain multinode-248386-m02 has defined MAC address 52:54:00:e8:3d:79 in network mk-multinode-248386
	I0316 17:37:43.932183  811378 main.go:141] libmachine: (multinode-248386-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:3d:79", ip: ""} in network mk-multinode-248386: {Iface:virbr1 ExpiryTime:2024-03-16 18:36:21 +0000 UTC Type:0 Mac:52:54:00:e8:3d:79 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-248386-m02 Clientid:01:52:54:00:e8:3d:79}
	I0316 17:37:43.932216  811378 main.go:141] libmachine: (multinode-248386-m02) DBG | domain multinode-248386-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:e8:3d:79 in network mk-multinode-248386
	I0316 17:37:43.932328  811378 host.go:66] Checking if "multinode-248386-m02" exists ...
	I0316 17:37:43.932639  811378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:37:43.932682  811378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:37:43.948862  811378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33683
	I0316 17:37:43.949374  811378 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:37:43.949899  811378 main.go:141] libmachine: Using API Version  1
	I0316 17:37:43.949925  811378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:37:43.950243  811378 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:37:43.950397  811378 main.go:141] libmachine: (multinode-248386-m02) Calling .DriverName
	I0316 17:37:43.950573  811378 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0316 17:37:43.950594  811378 main.go:141] libmachine: (multinode-248386-m02) Calling .GetSSHHostname
	I0316 17:37:43.953863  811378 main.go:141] libmachine: (multinode-248386-m02) DBG | domain multinode-248386-m02 has defined MAC address 52:54:00:e8:3d:79 in network mk-multinode-248386
	I0316 17:37:43.954300  811378 main.go:141] libmachine: (multinode-248386-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:3d:79", ip: ""} in network mk-multinode-248386: {Iface:virbr1 ExpiryTime:2024-03-16 18:36:21 +0000 UTC Type:0 Mac:52:54:00:e8:3d:79 Iaid: IPaddr:192.168.39.36 Prefix:24 Hostname:multinode-248386-m02 Clientid:01:52:54:00:e8:3d:79}
	I0316 17:37:43.954340  811378 main.go:141] libmachine: (multinode-248386-m02) DBG | domain multinode-248386-m02 has defined IP address 192.168.39.36 and MAC address 52:54:00:e8:3d:79 in network mk-multinode-248386
	I0316 17:37:43.954532  811378 main.go:141] libmachine: (multinode-248386-m02) Calling .GetSSHPort
	I0316 17:37:43.954775  811378 main.go:141] libmachine: (multinode-248386-m02) Calling .GetSSHKeyPath
	I0316 17:37:43.954926  811378 main.go:141] libmachine: (multinode-248386-m02) Calling .GetSSHUsername
	I0316 17:37:43.955074  811378 sshutil.go:53] new ssh client: &{IP:192.168.39.36 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18277-781196/.minikube/machines/multinode-248386-m02/id_rsa Username:docker}
	I0316 17:37:44.035923  811378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 17:37:44.051713  811378 status.go:257] multinode-248386-m02 status: &{Name:multinode-248386-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0316 17:37:44.051775  811378 status.go:255] checking status of multinode-248386-m03 ...
	I0316 17:37:44.052089  811378 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:37:44.052146  811378 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:37:44.068958  811378 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45281
	I0316 17:37:44.069532  811378 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:37:44.070063  811378 main.go:141] libmachine: Using API Version  1
	I0316 17:37:44.070086  811378 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:37:44.070446  811378 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:37:44.070641  811378 main.go:141] libmachine: (multinode-248386-m03) Calling .GetState
	I0316 17:37:44.072328  811378 status.go:330] multinode-248386-m03 host status = "Stopped" (err=<nil>)
	I0316 17:37:44.072349  811378 status.go:343] host is not running, skipping remaining checks
	I0316 17:37:44.072357  811378 status.go:257] multinode-248386-m03 status: &{Name:multinode-248386-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.57s)
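
Note: "minikube status" exits non-zero (7 in this run) as soon as any node in the profile is stopped, so scripts should branch on the exit code rather than scrape the text. A minimal sketch:

    out/minikube-linux-amd64 -p multinode-248386 status >/dev/null
    rc=$?
    [ "$rc" -eq 0 ] && echo "all nodes running" || echo "degraded (exit $rc)"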

TestMultiNode/serial/StartAfterStop (28.18s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 node start m03 -v=7 --alsologtostderr
E0316 17:38:03.749880  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-248386 node start m03 -v=7 --alsologtostderr: (27.493250568s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (28.18s)

TestMultiNode/serial/RestartKeepsNodes (312.18s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-248386
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-248386
E0316 17:39:56.779083  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-248386: (3m5.531068921s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-248386 --wait=true -v=8 --alsologtostderr
E0316 17:42:59.825444  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
E0316 17:43:03.749847  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-248386 --wait=true -v=8 --alsologtostderr: (2m6.520446268s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-248386
--- PASS: TestMultiNode/serial/RestartKeepsNodes (312.18s)
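
Note: the test validates the restart by comparing "node list" output captured before the stop with the output afterwards. A sketch of that before/after check:

    before=$(out/minikube-linux-amd64 node list -p multinode-248386)
    out/minikube-linux-amd64 stop -p multinode-248386
    out/minikube-linux-amd64 start -p multinode-248386 --wait=true
    after=$(out/minikube-linux-amd64 node list -p multinode-248386)
    [ "$before" = "$after" ] && echo "node set preserved"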

TestMultiNode/serial/DeleteNode (2.41s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-248386 node delete m03: (1.808152952s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.41s)

TestMultiNode/serial/StopMultiNode (184.22s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 stop
E0316 17:44:56.780988  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-248386 stop: (3m4.003563176s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-248386 status: exit status 7 (108.805847ms)

-- stdout --
	multinode-248386
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-248386-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-248386 status --alsologtostderr: exit status 7 (108.403089ms)

-- stdout --
	multinode-248386
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-248386-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0316 17:46:31.021233  813556 out.go:291] Setting OutFile to fd 1 ...
	I0316 17:46:31.021392  813556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:46:31.021402  813556 out.go:304] Setting ErrFile to fd 2...
	I0316 17:46:31.021419  813556 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:46:31.021653  813556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-781196/.minikube/bin
	I0316 17:46:31.021865  813556 out.go:298] Setting JSON to false
	I0316 17:46:31.021913  813556 mustload.go:65] Loading cluster: multinode-248386
	I0316 17:46:31.022046  813556 notify.go:220] Checking for updates...
	I0316 17:46:31.022317  813556 config.go:182] Loaded profile config "multinode-248386": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0316 17:46:31.022332  813556 status.go:255] checking status of multinode-248386 ...
	I0316 17:46:31.022750  813556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:46:31.022859  813556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:46:31.043520  813556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39417
	I0316 17:46:31.044105  813556 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:46:31.044750  813556 main.go:141] libmachine: Using API Version  1
	I0316 17:46:31.044782  813556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:46:31.045194  813556 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:46:31.045464  813556 main.go:141] libmachine: (multinode-248386) Calling .GetState
	I0316 17:46:31.047206  813556 status.go:330] multinode-248386 host status = "Stopped" (err=<nil>)
	I0316 17:46:31.047225  813556 status.go:343] host is not running, skipping remaining checks
	I0316 17:46:31.047232  813556 status.go:257] multinode-248386 status: &{Name:multinode-248386 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0316 17:46:31.047286  813556 status.go:255] checking status of multinode-248386-m02 ...
	I0316 17:46:31.047614  813556 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0316 17:46:31.047656  813556 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 17:46:31.063548  813556 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39863
	I0316 17:46:31.064527  813556 main.go:141] libmachine: () Calling .GetVersion
	I0316 17:46:31.065230  813556 main.go:141] libmachine: Using API Version  1
	I0316 17:46:31.065296  813556 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 17:46:31.065698  813556 main.go:141] libmachine: () Calling .GetMachineName
	I0316 17:46:31.065927  813556 main.go:141] libmachine: (multinode-248386-m02) Calling .GetState
	I0316 17:46:31.067640  813556 status.go:330] multinode-248386-m02 host status = "Stopped" (err=<nil>)
	I0316 17:46:31.067662  813556 status.go:343] host is not running, skipping remaining checks
	I0316 17:46:31.067670  813556 status.go:257] multinode-248386-m02 status: &{Name:multinode-248386-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (184.22s)

TestMultiNode/serial/RestartMultiNode (83.68s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-248386 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-248386 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m23.07954929s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-248386 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (83.68s)

TestMultiNode/serial/ValidateNameConflict (51.86s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-248386
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-248386-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-248386-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (79.263235ms)

-- stdout --
	* [multinode-248386-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18277
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18277-781196/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-781196/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-248386-m02' is duplicated with machine name 'multinode-248386-m02' in profile 'multinode-248386'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-248386-m03 --driver=kvm2  --container-runtime=containerd
E0316 17:48:03.750744  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-248386-m03 --driver=kvm2  --container-runtime=containerd: (50.431549304s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-248386
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-248386: exit status 80 (250.047935ms)

-- stdout --
	* Adding node m03 to cluster multinode-248386 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-248386-m03 already exists in multinode-248386-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-248386-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-248386-m03: (1.037420187s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (51.86s)
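
Note: profile names are checked against machine names inside existing multi-node profiles as well, which is why "multinode-248386-m02" is refused up front with exit 14 (MK_USAGE) while "multinode-248386-m03" starts fine and only collides later at "node add". A sketch of the up-front check:

    out/minikube-linux-amd64 node list -p multinode-248386                                              # machine names already taken
    out/minikube-linux-amd64 start -p multinode-248386-m02 --driver=kvm2 --container-runtime=containerd # exit 14: duplicate name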

TestPreload (275.46s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-707859 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0316 17:49:56.779759  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-707859 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (2m4.642120076s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-707859 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-707859
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-707859: (1m32.470320802s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-707859 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0316 17:52:46.799473  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
E0316 17:53:03.750152  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-707859 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (55.987111688s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-707859 image list
helpers_test.go:175: Cleaning up "test-preload-707859" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-707859
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-707859: (1.136541982s)
--- PASS: TestPreload (275.46s)
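
Note: preload ships a tarball of container images for the requested Kubernetes version, and this test asserts that an image pulled while running with --preload=false still survives the stop/start cycle. The pull-and-verify pair, as a standalone sketch:

    out/minikube-linux-amd64 -p test-preload-707859 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 -p test-preload-707859 image list | grep busybox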

TestScheduledStopUnix (121.85s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-973669 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-973669 --memory=2048 --driver=kvm2  --container-runtime=containerd: (49.831629218s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-973669 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-973669 -n scheduled-stop-973669
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-973669 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-973669 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-973669 -n scheduled-stop-973669
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-973669
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-973669 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0316 17:54:56.779630  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-973669
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-973669: exit status 7 (84.722384ms)

-- stdout --
	scheduled-stop-973669
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-973669 -n scheduled-stop-973669
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-973669 -n scheduled-stop-973669: exit status 7 (87.413402ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-973669" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-973669
--- PASS: TestScheduledStopUnix (121.85s)
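
Note: --schedule arms a background stop after the given duration and --cancel-scheduled disarms it; the pending deadline is exposed through the TimeToStop status field polled above. The arm/inspect/cancel cycle as a sketch:

    out/minikube-linux-amd64 stop -p scheduled-stop-973669 --schedule 5m
    out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-973669
    out/minikube-linux-amd64 stop -p scheduled-stop-973669 --cancel-scheduled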

TestRunningBinaryUpgrade (223.59s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3133798259 start -p running-upgrade-762358 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3133798259 start -p running-upgrade-762358 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m2.431812475s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-762358 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-762358 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m38.984405448s)
helpers_test.go:175: Cleaning up "running-upgrade-762358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-762358
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-762358: (1.513761729s)
--- PASS: TestRunningBinaryUpgrade (223.59s)
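
Note: the upgrade path under test is "same profile, newer binary": a pinned v1.26.0 release binary creates the cluster, then the freshly built binary re-runs start against it in place. A condensed sketch (the old binary path comes from the run above and still uses the legacy --vm-driver spelling):

    /tmp/minikube-v1.26.0.3133798259 start -p running-upgrade-762358 --memory=2200 --vm-driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 start -p running-upgrade-762358 --memory=2200 --driver=kvm2 --container-runtime=containerd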

TestKubernetesUpgrade (220.13s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-041287 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-041287 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m31.449354829s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-041287
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-041287: (2.388698457s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-041287 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-041287 status --format={{.Host}}: exit status 7 (109.776804ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-041287 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-041287 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m25.065168327s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-041287 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-041287 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-041287 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (120.610501ms)

-- stdout --
	* [kubernetes-upgrade-041287] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18277
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18277-781196/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-781196/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-041287
	    minikube start -p kubernetes-upgrade-041287 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0412872 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-041287 --kubernetes-version=v1.29.0-rc.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-041287 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-041287 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (39.537239265s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-041287" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-041287
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-041287: (1.380835208s)
--- PASS: TestKubernetesUpgrade (220.13s)
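
In short, one profile is driven through a start at v1.20.0, a clean stop, an upgrade to v1.29.0-rc.2, a downgrade attempt that is refused up front (exit status 106, K8S_DOWNGRADE_UNSUPPORTED, in ~120ms), and a final restart at the newer version. As the suggestion text above spells out, the only supported way "down" is to recreate the profile; a sketch with an arbitrary profile name:

	# downgrades are never done in place; delete and recreate instead
	minikube delete -p kubernetes-upgrade
	minikube start -p kubernetes-upgrade --kubernetes-version=v1.20.0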

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-425873 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-425873 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (104.752495ms)

-- stdout --
	* [NoKubernetes-425873] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18277
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18277-781196/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-781196/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
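
This is a pure flag-validation check: --no-kubernetes and --kubernetes-version are mutually exclusive, so minikube bails out with exit status 14 in about 100ms, before touching the driver. If the version is pinned in the global config rather than passed on the command line, the error message above already names the fix:

	# drop a globally pinned version, then start without Kubernetes
	minikube config unset kubernetes-version
	minikube start -p NoKubernetes-425873 --no-kubernetes --driver=kvm2 --container-runtime=containerd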

TestNoKubernetes/serial/StartWithK8s (103.36s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-425873 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-425873 --driver=kvm2  --container-runtime=containerd: (1m42.566536463s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-425873 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (103.36s)

TestNetworkPlugins/group/false (3.79s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-376648 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-376648 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (134.264822ms)

-- stdout --
	* [false-376648] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18277
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18277-781196/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-781196/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0316 17:55:29.346616  817583 out.go:291] Setting OutFile to fd 1 ...
	I0316 17:55:29.346776  817583 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:55:29.346795  817583 out.go:304] Setting ErrFile to fd 2...
	I0316 17:55:29.346802  817583 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 17:55:29.347133  817583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18277-781196/.minikube/bin
	I0316 17:55:29.348057  817583 out.go:298] Setting JSON to false
	I0316 17:55:29.349478  817583 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":85077,"bootTime":1710526653,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0316 17:55:29.349578  817583 start.go:139] virtualization: kvm guest
	I0316 17:55:29.352163  817583 out.go:177] * [false-376648] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0316 17:55:29.353792  817583 out.go:177]   - MINIKUBE_LOCATION=18277
	I0316 17:55:29.355391  817583 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 17:55:29.353903  817583 notify.go:220] Checking for updates...
	I0316 17:55:29.357191  817583 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18277-781196/kubeconfig
	I0316 17:55:29.358579  817583 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18277-781196/.minikube
	I0316 17:55:29.359890  817583 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0316 17:55:29.361228  817583 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 17:55:29.362946  817583 config.go:182] Loaded profile config "NoKubernetes-425873": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0316 17:55:29.363050  817583 config.go:182] Loaded profile config "force-systemd-env-478037": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0316 17:55:29.363128  817583 config.go:182] Loaded profile config "offline-containerd-404386": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.28.4
	I0316 17:55:29.363227  817583 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 17:55:29.403551  817583 out.go:177] * Using the kvm2 driver based on user configuration
	I0316 17:55:29.405005  817583 start.go:297] selected driver: kvm2
	I0316 17:55:29.405035  817583 start.go:901] validating driver "kvm2" against <nil>
	I0316 17:55:29.405049  817583 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 17:55:29.407131  817583 out.go:177] 
	W0316 17:55:29.408478  817583 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0316 17:55:29.409810  817583 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-376648 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-376648

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-376648

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-376648

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-376648

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-376648

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-376648

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-376648

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-376648

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-376648

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-376648

>>> host: /etc/nsswitch.conf:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: /etc/hosts:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: /etc/resolv.conf:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-376648

>>> host: crictl pods:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: crictl containers:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> k8s: describe netcat deployment:
error: context "false-376648" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-376648" does not exist

>>> k8s: netcat logs:
error: context "false-376648" does not exist

>>> k8s: describe coredns deployment:
error: context "false-376648" does not exist

>>> k8s: describe coredns pods:
error: context "false-376648" does not exist

>>> k8s: coredns logs:
error: context "false-376648" does not exist

>>> k8s: describe api server pod(s):
error: context "false-376648" does not exist

>>> k8s: api server logs:
error: context "false-376648" does not exist

>>> host: /etc/cni:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: ip a s:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: ip r s:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: iptables-save:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: iptables table nat:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> k8s: describe kube-proxy daemon set:
error: context "false-376648" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-376648" does not exist

>>> k8s: kube-proxy logs:
error: context "false-376648" does not exist

>>> host: kubelet daemon status:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: kubelet daemon config:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> k8s: kubelet logs:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-376648

>>> host: docker daemon status:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: docker daemon config:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: /etc/docker/daemon.json:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: docker system info:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: cri-docker daemon status:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: cri-docker daemon config:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: cri-dockerd version:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: containerd daemon status:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: containerd daemon config:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: /etc/containerd/config.toml:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: containerd config dump:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: crio daemon status:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: crio daemon config:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: /etc/crio:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

>>> host: crio config:
* Profile "false-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-376648"

----------------------- debugLogs end: false-376648 [took: 3.490254145s] --------------------------------
helpers_test.go:175: Cleaning up "false-376648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-376648
--- PASS: TestNetworkPlugins/group/false (3.79s)
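
All of the debugLogs noise above is expected: the start command was rejected immediately ("The "containerd" container runtime requires CNI", exit status 14), so no VM, profile state, or kubeconfig context was ever created, and every probe reports exactly that. With containerd a CNI can be selected but not disabled; a sketch of valid alternatives (profile name arbitrary):

	# choose a concrete CNI, or leave it to auto-selection
	minikube start -p cni-demo --cni=bridge --driver=kvm2 --container-runtime=containerd
	minikube start -p cni-demo --cni=auto --driver=kvm2 --container-runtime=containerd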

TestStoppedBinaryUpgrade/Setup (0.45s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.45s)

TestNoKubernetes/serial/StartWithStopK8s (47.11s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-425873 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-425873 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (45.997668668s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-425873 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-425873 status -o json: exit status 2 (258.251464ms)

-- stdout --
	{"Name":"NoKubernetes-425873","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-425873
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (47.11s)
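
The exit status 2 here is expected rather than a failure: restarting the existing profile with --no-kubernetes keeps the VM but stops the Kubernetes components, and minikube status encodes any non-Running component in its exit code. The JSON output makes this easy to consume in scripts; a sketch assuming jq is available:

	# capture the JSON despite the non-zero exit, then inspect component state
	out/minikube-linux-amd64 -p NoKubernetes-425873 status -o json > status.json || true
	jq -r '.Host, .Kubelet, .APIServer' status.json   # Running / Stopped / Stopped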

TestStoppedBinaryUpgrade/Upgrade (170.2s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3363599942 start -p stopped-upgrade-623618 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3363599942 start -p stopped-upgrade-623618 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m28.169495793s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3363599942 -p stopped-upgrade-623618 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3363599942 -p stopped-upgrade-623618 stop: (2.149440352s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-623618 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-623618 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m19.884481717s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (170.20s)

TestNoKubernetes/serial/Start (35.13s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-425873 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0316 17:58:03.750281  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-425873 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (35.130675354s)
--- PASS: TestNoKubernetes/serial/Start (35.13s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-425873 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-425873 "sudo systemctl is-active --quiet service kubelet": exit status 1 (260.52289ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
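
The assertion is inverted: the test passes because the command fails. systemctl is-active exits 0 only for an active unit (an inactive one conventionally returns 3), and minikube ssh propagates that status, hence the "Process exited with status 3" in stderr. The same probe works against any profile:

	# exit 0 only if kubelet is running inside the VM; 3 expected here
	out/minikube-linux-amd64 ssh -p NoKubernetes-425873 "sudo systemctl is-active --quiet service kubelet"
	echo $?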

TestNoKubernetes/serial/ProfileList (19.12s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (18.441703974s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (19.12s)
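
The 18-odd seconds are not spent parsing: profile list queries each existing cluster for its live status, so its runtime grows with the number of profiles. The JSON variant run right after is the one to script against; a sketch assuming jq and the usual valid/invalid grouping in the output:

	# healthy profiles land under .valid[], broken ones under .invalid[]
	out/minikube-linux-amd64 profile list --output=json | jq -r '.valid[].Name'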

TestNoKubernetes/serial/Stop (1.55s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-425873
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-425873: (1.554079491s)
--- PASS: TestNoKubernetes/serial/Stop (1.55s)

TestNoKubernetes/serial/StartNoArgs (51.84s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-425873 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-425873 --driver=kvm2  --container-runtime=containerd: (51.840954855s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (51.84s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-425873 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-425873 "sudo systemctl is-active --quiet service kubelet": exit status 1 (230.690327ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.95s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-623618
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.95s)

TestPause/serial/Start (145.23s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-749062 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-749062 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (2m25.228665577s)
--- PASS: TestPause/serial/Start (145.23s)

TestNetworkPlugins/group/auto/Start (128.65s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-376648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-376648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (2m8.654634921s)
--- PASS: TestNetworkPlugins/group/auto/Start (128.65s)

TestNetworkPlugins/group/kindnet/Start (93.69s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-376648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-376648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m33.689932853s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (93.69s)

TestPause/serial/SecondStartNoReconfiguration (45.01s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-749062 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0316 18:03:03.750328  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-749062 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (44.969259206s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (45.01s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-cn4hq" [70e1d8f1-6c96-494f-bf79-cfad7c0a68b6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005211523s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-376648 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-376648 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-sqz6b" [a9e09445-ab08-4711-bd45-94c86c827e20] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-sqz6b" [a9e09445-ab08-4711-bd45-94c86c827e20] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004794599s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)
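
The Pending -> Running transition above is just the harness polling the app=netcat label until the dnsutils container is ready. An equivalent hand-rolled wait with stock kubectl (a sketch; the manifest path is the repo's testdata file):

	kubectl --context kindnet-376648 replace --force -f testdata/netcat-deployment.yaml
	# block until the pod behind the selector reports Ready
	kubectl --context kindnet-376648 wait --for=condition=Ready pod -l app=netcat --timeout=15m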

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-376648 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

TestNetworkPlugins/group/auto/NetCatPod (9.59s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-376648 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-sqgg9" [165d5b48-f74d-4a77-a1d1-b4a6e9eb201b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-sqgg9" [165d5b48-f74d-4a77-a1d1-b4a6e9eb201b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.005022675s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.59s)

TestNetworkPlugins/group/kindnet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-376648 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-376648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-376648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)
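
Localhost and HairPin differ only in the dial target: the first has the pod connect to itself via localhost:8080, while the second dials its own Service name, forcing the connection back out through the service VIP and so verifying hairpin NAT in the CNI. Stripped down, the check is a single zero-I/O port scan (the exact command the test runs):

	# -z: connect-only scan, -w 5: timeout; exit 0 means the service hairpins back to the pod
	kubectl --context kindnet-376648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"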

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-376648 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-376648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-376648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestPause/serial/Pause (1.01s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-749062 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-749062 --alsologtostderr -v=5: (1.014686356s)
--- PASS: TestPause/serial/Pause (1.01s)

TestPause/serial/VerifyStatus (0.34s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-749062 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-749062 --output=json --layout=cluster: exit status 2 (343.435214ms)

-- stdout --
	{"Name":"pause-749062","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-749062","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)
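
Note the deliberate status codes in the cluster layout: 418 ("Paused") for paused components, 405 ("Stopped") for the kubelet, 200 for healthy ones, with the command itself exiting 2 because not everything is OK. Extracting the per-component view, as a sketch assuming jq:

	out/minikube-linux-amd64 status -p pause-749062 --output=json --layout=cluster > layout.json || true
	jq -r '.Nodes[].Components | to_entries[] | "\(.key): \(.value.StatusName)"' layout.json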

TestPause/serial/Unpause (0.93s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-749062 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.93s)

TestPause/serial/PauseAgain (1.38s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-749062 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-749062 --alsologtostderr -v=5: (1.376839953s)
--- PASS: TestPause/serial/PauseAgain (1.38s)

TestPause/serial/DeletePaused (1.21s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-749062 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-749062 --alsologtostderr -v=5: (1.21442806s)
--- PASS: TestPause/serial/DeletePaused (1.21s)

TestPause/serial/VerifyDeletedResources (2.03s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (2.027906085s)
--- PASS: TestPause/serial/VerifyDeletedResources (2.03s)

TestNetworkPlugins/group/calico/Start (120.76s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-376648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-376648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (2m0.762094379s)
--- PASS: TestNetworkPlugins/group/calico/Start (120.76s)

TestNetworkPlugins/group/custom-flannel/Start (85.55s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-376648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-376648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m25.550443861s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (85.55s)

TestNetworkPlugins/group/enable-default-cni/Start (122.63s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-376648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-376648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (2m2.632845809s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (122.63s)

TestNetworkPlugins/group/flannel/Start (156.95s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-376648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
E0316 18:04:56.779435  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-376648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (2m36.949663519s)
--- PASS: TestNetworkPlugins/group/flannel/Start (156.95s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-376648 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-376648 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wk82t" [f4a31cae-b0de-4425-b26d-24b134c84d59] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wk82t" [f4a31cae-b0de-4425-b26d-24b134c84d59] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.010409963s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.35s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-376648 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-376648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-376648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)
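The custom-flannel checks above form the per-CNI connectivity suite: inspect the kubelet flags, deploy a netcat pod, resolve a cluster DNS name, dial the pod's own loopback, and dial the pod back through its own service name (hairpin). They can be replayed by hand against the same profile; a minimal sketch, assuming the custom-flannel-376648 cluster from this run is still up:

	# DNS: resolve the API service name through cluster DNS
	kubectl --context custom-flannel-376648 exec deployment/netcat -- nslookup kubernetes.default
	# Localhost: dial port 8080 on the pod's own loopback
	kubectl --context custom-flannel-376648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# HairPin: dial the pod back through its own "netcat" service name
	kubectl --context custom-flannel-376648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"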

                                                
                                    
TestNetworkPlugins/group/bridge/Start (108.73s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-376648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-376648 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m48.729778429s)
--- PASS: TestNetworkPlugins/group/bridge/Start (108.73s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-bgzzq" [97e31195-186a-46ab-b78e-0a14542506a3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.008501207s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-376648 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-376648 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7wgh5" [9e3cc56f-377c-41cf-93c5-4694ac93c8de] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7wgh5" [9e3cc56f-377c-41cf-93c5-4694ac93c8de] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.006238093s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-376648 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-376648 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-r8nr9" [a1f0d551-81ea-464d-85c6-106942f03ca3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-r8nr9" [a1f0d551-81ea-464d-85c6-106942f03ca3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004636392s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-376648 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-376648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-376648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-376648 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-376648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-376648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (136.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-985498 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-985498 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m16.165612767s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (136.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (149.77s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-738074 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-738074 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (2m29.771596801s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (149.77s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-f6s9q" [8928caaa-b9b9-4d67-9311-699dc11c5197] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006198785s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
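ControllerPod simply waits for the CNI's own agent pod to report healthy before the connectivity checks run. The equivalent manual probe (a sketch, assuming the flannel-376648 profile from this run still exists):

	# the pods the test polls for up to 10m
	kubectl --context flannel-376648 -n kube-flannel get pods -l app=flannel
	# or block until one is Ready, mirroring the wait above
	kubectl --context flannel-376648 -n kube-flannel wait --for=condition=Ready pod -l app=flannel --timeout=10m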

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-376648 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-376648 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fdszx" [dab0e1f9-8109-438e-958c-89ba194277eb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fdszx" [dab0e1f9-8109-438e-958c-89ba194277eb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004745049s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-376648 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-376648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-376648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (71.91s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-831781 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-831781 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m11.914295014s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (71.91s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-376648 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.30s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-376648 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ccnc9" [7c6db6e1-2e12-4000-bdea-d081d0b1cda6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ccnc9" [7c6db6e1-2e12-4000-bdea-d081d0b1cda6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.013112648s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-376648 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-376648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-376648 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (107.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-683490 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0316 18:08:03.750463  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
E0316 18:08:04.692221  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/kindnet-376648/client.crt: no such file or directory
E0316 18:08:04.697544  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/kindnet-376648/client.crt: no such file or directory
E0316 18:08:04.707987  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/kindnet-376648/client.crt: no such file or directory
E0316 18:08:04.728348  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/kindnet-376648/client.crt: no such file or directory
E0316 18:08:04.768686  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/kindnet-376648/client.crt: no such file or directory
E0316 18:08:04.849052  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/kindnet-376648/client.crt: no such file or directory
E0316 18:08:05.009700  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/kindnet-376648/client.crt: no such file or directory
E0316 18:08:05.330073  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/kindnet-376648/client.crt: no such file or directory
E0316 18:08:05.970298  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/kindnet-376648/client.crt: no such file or directory
E0316 18:08:07.251040  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/kindnet-376648/client.crt: no such file or directory
E0316 18:08:09.812156  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/kindnet-376648/client.crt: no such file or directory
E0316 18:08:14.591884  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/auto-376648/client.crt: no such file or directory
E0316 18:08:14.597203  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/auto-376648/client.crt: no such file or directory
E0316 18:08:14.607582  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/auto-376648/client.crt: no such file or directory
E0316 18:08:14.627921  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/auto-376648/client.crt: no such file or directory
E0316 18:08:14.668274  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/auto-376648/client.crt: no such file or directory
E0316 18:08:14.748805  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/auto-376648/client.crt: no such file or directory
E0316 18:08:14.909544  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/auto-376648/client.crt: no such file or directory
E0316 18:08:14.932877  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/kindnet-376648/client.crt: no such file or directory
E0316 18:08:15.230462  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/auto-376648/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-683490 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (1m47.171071132s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (107.17s)
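(The E0316 cert_rotation.go:168 lines interleaved through this and the following tests come, by all appearances, from client-go's certificate reloader inside the shared test process: it keeps trying to reopen client.crt files for profiles such as kindnet-376648 and auto-376648 that earlier tests already deleted. They are cross-test noise, unrelated to the test in which they appear.)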

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (7.31s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-831781 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [af79d656-409d-4130-ab55-00fd7866d87e] Pending
E0316 18:08:15.871053  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/auto-376648/client.crt: no such file or directory
helpers_test.go:344: "busybox" [af79d656-409d-4130-ab55-00fd7866d87e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0316 18:08:17.151272  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/auto-376648/client.crt: no such file or directory
helpers_test.go:344: "busybox" [af79d656-409d-4130-ab55-00fd7866d87e] Running
E0316 18:08:19.712238  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/auto-376648/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.006251368s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-831781 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.31s)
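DeployApp creates a busybox pod from the repo's testdata and then reads the open-file limit inside it. Replayed by hand (a sketch: testdata/busybox.yaml is relative to the minikube test directory, and kubectl wait stands in for the test's 8m polling loop):

	kubectl --context embed-certs-831781 create -f testdata/busybox.yaml
	kubectl --context embed-certs-831781 wait --for=condition=Ready pod/busybox --timeout=8m
	kubectl --context embed-certs-831781 exec busybox -- /bin/sh -c "ulimit -n"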

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.35s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-831781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-831781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.253031998s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-831781 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (92.52s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-831781 --alsologtostderr -v=3
E0316 18:08:24.832766  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/auto-376648/client.crt: no such file or directory
E0316 18:08:25.173388  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/kindnet-376648/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-831781 --alsologtostderr -v=3: (1m32.514967367s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (92.52s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-985498 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9d1a1153-d964-4893-aae0-6b926755edf4] Pending
helpers_test.go:344: "busybox" [9d1a1153-d964-4893-aae0-6b926755edf4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9d1a1153-d964-4893-aae0-6b926755edf4] Running
E0316 18:08:35.073868  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/auto-376648/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004545861s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-985498 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.56s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.20s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-985498 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-985498 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.074642622s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-985498 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (92.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-985498 --alsologtostderr -v=3
E0316 18:08:45.653754  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/kindnet-376648/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-985498 --alsologtostderr -v=3: (1m32.525071796s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.53s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-738074 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a4cb6c6e-fa3b-49b0-b175-626d20b137d8] Pending
helpers_test.go:344: "busybox" [a4cb6c6e-fa3b-49b0-b175-626d20b137d8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a4cb6c6e-fa3b-49b0-b175-626d20b137d8] Running
E0316 18:08:55.554256  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/auto-376648/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003612891s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-738074 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.35s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-738074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-738074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.112051724s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-738074 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (92.53s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-738074 --alsologtostderr -v=3
E0316 18:09:26.615005  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/kindnet-376648/client.crt: no such file or directory
E0316 18:09:26.800432  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
E0316 18:09:36.514609  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/auto-376648/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-738074 --alsologtostderr -v=3: (1m32.52648551s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.53s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-683490 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [acaf5387-cd47-4050-86b8-fecf1fa30075] Pending
helpers_test.go:344: "busybox" [acaf5387-cd47-4050-86b8-fecf1fa30075] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [acaf5387-cd47-4050-86b8-fecf1fa30075] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.005168468s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-683490 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.33s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-683490 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-683490 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.254751648s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-683490 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (92.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-683490 --alsologtostderr -v=3
E0316 18:09:56.779855  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-683490 --alsologtostderr -v=3: (1m32.532897751s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (92.53s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-831781 -n embed-certs-831781
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-831781 -n embed-certs-831781: exit status 7 (83.369065ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-831781 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)
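Note the flow here: minikube status exits non-zero for a stopped host, so the test accepts exit status 7 with output "Stopped" as the expected state ("may be ok") and then proves an addon can still be enabled while the VM is down. The same two steps, runnable as-is from this run's workspace:

	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-831781 -n embed-certs-831781
	# exit status 7 plus "Stopped" means the host is down by design, not a command failure
	out/minikube-linux-amd64 addons enable dashboard -p embed-certs-831781 --images=MetricsScraper=registry.k8s.io/echoserver:1.4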

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (324.99s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-831781 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0316 18:10:06.727642  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/custom-flannel-376648/client.crt: no such file or directory
E0316 18:10:06.733014  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/custom-flannel-376648/client.crt: no such file or directory
E0316 18:10:06.743372  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/custom-flannel-376648/client.crt: no such file or directory
E0316 18:10:06.763769  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/custom-flannel-376648/client.crt: no such file or directory
E0316 18:10:06.804216  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/custom-flannel-376648/client.crt: no such file or directory
E0316 18:10:06.885012  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/custom-flannel-376648/client.crt: no such file or directory
E0316 18:10:07.045505  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/custom-flannel-376648/client.crt: no such file or directory
E0316 18:10:07.365784  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/custom-flannel-376648/client.crt: no such file or directory
E0316 18:10:08.006679  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/custom-flannel-376648/client.crt: no such file or directory
E0316 18:10:09.286953  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/custom-flannel-376648/client.crt: no such file or directory
E0316 18:10:11.848073  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/custom-flannel-376648/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-831781 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (5m24.629118075s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-831781 -n embed-certs-831781
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (324.99s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-985498 -n old-k8s-version-985498
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-985498 -n old-k8s-version-985498: exit status 7 (87.339951ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-985498 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-738074 -n no-preload-738074
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-738074 -n no-preload-738074: exit status 7 (105.629549ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-738074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (329.18s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-738074 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0316 18:10:41.302017  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/calico-376648/client.crt: no such file or directory
E0316 18:10:41.307358  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/calico-376648/client.crt: no such file or directory
E0316 18:10:41.317788  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/calico-376648/client.crt: no such file or directory
E0316 18:10:41.338807  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/calico-376648/client.crt: no such file or directory
E0316 18:10:41.379185  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/calico-376648/client.crt: no such file or directory
E0316 18:10:41.459577  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/calico-376648/client.crt: no such file or directory
E0316 18:10:41.620222  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/calico-376648/client.crt: no such file or directory
E0316 18:10:41.940986  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/calico-376648/client.crt: no such file or directory
E0316 18:10:42.581696  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/calico-376648/client.crt: no such file or directory
E0316 18:10:43.862429  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/calico-376648/client.crt: no such file or directory
E0316 18:10:46.414405  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/enable-default-cni-376648/client.crt: no such file or directory
E0316 18:10:46.419781  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/enable-default-cni-376648/client.crt: no such file or directory
E0316 18:10:46.423037  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/calico-376648/client.crt: no such file or directory
E0316 18:10:46.430260  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/enable-default-cni-376648/client.crt: no such file or directory
E0316 18:10:46.450654  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/enable-default-cni-376648/client.crt: no such file or directory
E0316 18:10:46.491010  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/enable-default-cni-376648/client.crt: no such file or directory
E0316 18:10:46.571434  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/enable-default-cni-376648/client.crt: no such file or directory
E0316 18:10:46.732002  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/enable-default-cni-376648/client.crt: no such file or directory
E0316 18:10:47.052668  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/enable-default-cni-376648/client.crt: no such file or directory
E0316 18:10:47.690427  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/custom-flannel-376648/client.crt: no such file or directory
E0316 18:10:47.693714  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/enable-default-cni-376648/client.crt: no such file or directory
E0316 18:10:48.535812  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/kindnet-376648/client.crt: no such file or directory
E0316 18:10:48.974679  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/enable-default-cni-376648/client.crt: no such file or directory
E0316 18:10:51.535591  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/enable-default-cni-376648/client.crt: no such file or directory
E0316 18:10:51.543791  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/calico-376648/client.crt: no such file or directory
E0316 18:10:56.656146  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/enable-default-cni-376648/client.crt: no such file or directory
E0316 18:10:58.434855  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/auto-376648/client.crt: no such file or directory
E0316 18:11:01.784077  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/calico-376648/client.crt: no such file or directory
E0316 18:11:06.897367  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/enable-default-cni-376648/client.crt: no such file or directory
E0316 18:11:22.264466  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/calico-376648/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-738074 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (5m28.836959093s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-738074 -n no-preload-738074
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (329.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-683490 -n default-k8s-diff-port-683490
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-683490 -n default-k8s-diff-port-683490: exit status 7 (87.632904ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-683490 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (300.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-683490 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4
E0316 18:11:27.377794  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/enable-default-cni-376648/client.crt: no such file or directory
E0316 18:11:28.116264  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/flannel-376648/client.crt: no such file or directory
E0316 18:11:28.122130  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/flannel-376648/client.crt: no such file or directory
E0316 18:11:28.132527  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/flannel-376648/client.crt: no such file or directory
E0316 18:11:28.152881  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/flannel-376648/client.crt: no such file or directory
E0316 18:11:28.193234  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/flannel-376648/client.crt: no such file or directory
E0316 18:11:28.273583  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/flannel-376648/client.crt: no such file or directory
E0316 18:11:28.434072  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/flannel-376648/client.crt: no such file or directory
E0316 18:11:28.651172  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/custom-flannel-376648/client.crt: no such file or directory
E0316 18:11:28.754279  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/flannel-376648/client.crt: no such file or directory
E0316 18:11:29.395165  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/flannel-376648/client.crt: no such file or directory
E0316 18:11:30.676350  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/flannel-376648/client.crt: no such file or directory
E0316 18:11:33.237538  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/flannel-376648/client.crt: no such file or directory
E0316 18:11:38.358198  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/flannel-376648/client.crt: no such file or directory
E0316 18:11:48.598772  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/flannel-376648/client.crt: no such file or directory
E0316 18:12:03.225501  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/calico-376648/client.crt: no such file or directory
E0316 18:12:08.338962  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/enable-default-cni-376648/client.crt: no such file or directory
E0316 18:12:09.079232  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/flannel-376648/client.crt: no such file or directory
E0316 18:12:28.847707  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/bridge-376648/client.crt: no such file or directory
E0316 18:12:28.853060  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/bridge-376648/client.crt: no such file or directory
E0316 18:12:28.863460  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/bridge-376648/client.crt: no such file or directory
E0316 18:12:28.883890  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/bridge-376648/client.crt: no such file or directory
E0316 18:12:28.924268  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/bridge-376648/client.crt: no such file or directory
E0316 18:12:29.004689  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/bridge-376648/client.crt: no such file or directory
E0316 18:12:29.165503  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/bridge-376648/client.crt: no such file or directory
E0316 18:12:29.486438  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/bridge-376648/client.crt: no such file or directory
E0316 18:12:30.127525  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/bridge-376648/client.crt: no such file or directory
E0316 18:12:31.408039  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/bridge-376648/client.crt: no such file or directory
E0316 18:12:33.968326  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/bridge-376648/client.crt: no such file or directory
E0316 18:12:39.089386  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/bridge-376648/client.crt: no such file or directory
E0316 18:12:49.329878  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/bridge-376648/client.crt: no such file or directory
E0316 18:12:50.040333  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/flannel-376648/client.crt: no such file or directory
E0316 18:12:50.571973  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/custom-flannel-376648/client.crt: no such file or directory
E0316 18:13:03.750387  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/addons-867363/client.crt: no such file or directory
E0316 18:13:04.692234  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/kindnet-376648/client.crt: no such file or directory
E0316 18:13:09.810859  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/bridge-376648/client.crt: no such file or directory
E0316 18:13:14.591733  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/auto-376648/client.crt: no such file or directory
E0316 18:13:25.146087  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/calico-376648/client.crt: no such file or directory
E0316 18:13:30.259540  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/enable-default-cni-376648/client.crt: no such file or directory
E0316 18:13:32.376793  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/kindnet-376648/client.crt: no such file or directory
E0316 18:13:42.275579  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/auto-376648/client.crt: no such file or directory
E0316 18:13:50.771702  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/bridge-376648/client.crt: no such file or directory
E0316 18:14:11.961017  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/flannel-376648/client.crt: no such file or directory
E0316 18:14:56.779759  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
E0316 18:15:06.727923  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/custom-flannel-376648/client.crt: no such file or directory
E0316 18:15:12.692799  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/bridge-376648/client.crt: no such file or directory
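Note: the E…cert_rotation.go:168 lines above are background noise, not part of this test's failure. They appear to come from client-go's client-certificate reload watcher, which is still tracking kubeconfig entries for network-plugin profiles (flannel-376648, bridge-376648, custom-flannel-376648, and so on) that earlier tests already deleted. One way to see which contexts have gone stale, as a manual check rather than anything the suite runs (kubeconfig path taken from the log above):

	kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/18277-781196/kubeconfig
	# delete-context drops a dangling entry whose client.crt no longer exists on disk
	kubectl config delete-context flannel-376648 --kubeconfig /home/jenkins/minikube-integration/18277-781196/kubeconfig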
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-683490 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.4: (4m59.719702193s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-683490 -n default-k8s-diff-port-683490
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (300.04s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9t2qw" [5549c109-2bc6-4edb-83d3-f76a8f069b85] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9t2qw" [5549c109-2bc6-4edb-83d3-f76a8f069b85] Running
E0316 18:15:34.412742  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/custom-flannel-376648/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.011244285s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.01s)
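UserAppExistsAfterStop polls for up to 9m0s until a pod carrying the k8s-app=kubernetes-dashboard label reports Running, as logged above. A rough kubectl equivalent of that wait, outside the Go harness (context name from this run; the test's own health check is slightly stricter than a plain Ready condition):

	kubectl --context embed-certs-831781 -n kubernetes-dashboard \
	  wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m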

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9t2qw" [5549c109-2bc6-4edb-83d3-f76a8f069b85] Running
E0316 18:15:41.302039  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/calico-376648/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008253936s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-831781 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-831781 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)
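VerifyKubernetesImages only reports images that fall outside the expected set for the Kubernetes version under test; extras such as kindnetd or busybox are logged, not failed. A rough manual version of the same check (assumes jq is available, and assumes the JSON entries expose a repoTags field, which may vary between minikube releases):

	out/minikube-linux-amd64 -p embed-certs-831781 image list --format=json \
	  | jq -r '.[].repoTags[]?' \
	  | grep -v '^registry.k8s.io/' || true   # whatever prints is a "non-minikube" image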

TestStartStop/group/embed-certs/serial/Pause (3.35s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-831781 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-831781 -n embed-certs-831781
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-831781 -n embed-certs-831781: exit status 2 (307.277698ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-831781 -n embed-certs-831781
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-831781 -n embed-certs-831781: exit status 2 (318.684231ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-831781 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-831781 -n embed-certs-831781
E0316 18:15:46.414874  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/enable-default-cni-376648/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-831781 -n embed-certs-831781
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.35s)
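The Pause subtest drives the profile through pause → status → unpause → status, treating exit status 2 from status as the expected signal while components are paused. Replayed by hand it looks roughly like this (profile name from this run; the trailing || true keeps a script moving past the expected non-zero exits):

	out/minikube-linux-amd64 pause -p embed-certs-831781
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p embed-certs-831781 || true   # prints Paused, exit 2
	out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p embed-certs-831781 || true     # prints Stopped, exit 2
	out/minikube-linux-amd64 unpause -p embed-certs-831781
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p embed-certs-831781           # should print Running again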

TestStartStop/group/newest-cni/serial/FirstStart (61.03s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-993416 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-993416 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (1m1.028264379s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (61.03s)
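Beyond raw startup time, this FirstStart exercises flag plumbing: --network-plugin=cni combined with --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 hands the pod CIDR straight through to kubeadm, while --wait=apiserver,system_pods,default_sa narrows which components minikube blocks on before declaring the start done. The same component.key=value mechanism reaches other components too, e.g. (profile name and kubelet setting here are illustrative, not from this run):

	minikube start -p demo --extra-config=kubelet.max-pods=64 --wait=apiserver,system_pods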

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-v5wz2" [99b76b79-83c8-4fa3-8271-ce3696b8d399] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-v5wz2" [99b76b79-83c8-4fa3-8271-ce3696b8d399] Running
E0316 18:16:08.986948  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/calico-376648/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.0053543s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-v5wz2" [99b76b79-83c8-4fa3-8271-ce3696b8d399] Running
E0316 18:16:14.100202  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/enable-default-cni-376648/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006111796s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-738074 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-738074 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/no-preload/serial/Pause (4.21s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-738074 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-738074 --alsologtostderr -v=1: (1.355136606s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-738074 -n no-preload-738074
E0316 18:16:19.826935  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/functional-344728/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-738074 -n no-preload-738074: exit status 2 (331.459587ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-738074 -n no-preload-738074
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-738074 -n no-preload-738074: exit status 2 (317.317206ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-738074 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p no-preload-738074 --alsologtostderr -v=1: (1.09764196s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-738074 -n no-preload-738074
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-738074 -n no-preload-738074
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.21s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pw7x5" [587ee489-78c6-422f-b877-d4b9d6eedc62] Running
E0316 18:16:28.117048  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/flannel-376648/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.007064005s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-pw7x5" [587ee489-78c6-422f-b877-d4b9d6eedc62] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005938054s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-683490 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-683490 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.21s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-683490 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-683490 -n default-k8s-diff-port-683490
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-683490 -n default-k8s-diff-port-683490: exit status 2 (288.37989ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-683490 -n default-k8s-diff-port-683490
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-683490 -n default-k8s-diff-port-683490: exit status 2 (294.542918ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-683490 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-683490 -n default-k8s-diff-port-683490
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-683490 -n default-k8s-diff-port-683490
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.21s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.28s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-993416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-993416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.276139289s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.28s)
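EnableAddonWhileActive also exercises the addon image-override flags: --images=MetricsServer=registry.k8s.io/echoserver:1.4 together with --registries=MetricsServer=fake.domain repoints the metrics-server addon at a deliberately unreachable registry, which appears to be how the suite validates addon wiring without pulling the real metrics-server image. The same mechanism can repoint any addon image, e.g. (profile name hypothetical):

	minikube addons enable metrics-server -p demo \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain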

TestStartStop/group/newest-cni/serial/Stop (2.46s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-993416 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-993416 --alsologtostderr -v=3: (2.455543186s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.46s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-993416 -n newest-cni-993416
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-993416 -n newest-cni-993416: exit status 7 (85.564399ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-993416 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)
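EnableAddonAfterStop leans on two behaviors: status exits with code 7 when the host is down (the log pairs that code with the Stopped output above), and addons enable still succeeds against a stopped profile, apparently because it only needs to rewrite the profile's stored config for the next start to apply. Checking the stopped state by hand (profile name from this run):

	out/minikube-linux-amd64 status --format='{{.Host}}' -p newest-cni-993416 || echo "status exited $? (7 = host stopped)"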

TestStartStop/group/newest-cni/serial/SecondStart (38.49s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-993416 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2
E0316 18:16:55.801704  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/flannel-376648/client.crt: no such file or directory
E0316 18:17:28.847626  788442 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18277-781196/.minikube/profiles/bridge-376648/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-993416 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.29.0-rc.2: (38.168740083s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-993416 -n newest-cni-993416
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (38.49s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-993416 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (2.9s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-993416 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-993416 -n newest-cni-993416
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-993416 -n newest-cni-993416: exit status 2 (286.914194ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-993416 -n newest-cni-993416
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-993416 -n newest-cni-993416: exit status 2 (279.529779ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-993416 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-993416 -n newest-cni-993416
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-993416 -n newest-cni-993416
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.90s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-656nk" [7a432aba-f5f5-467a-9ef2-39cf00edac55] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005326454s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-656nk" [7a432aba-f5f5-467a-9ef2-39cf00edac55] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004735695s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-985498 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-985498 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (2.83s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-985498 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-985498 -n old-k8s-version-985498
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-985498 -n old-k8s-version-985498: exit status 2 (279.902338ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-985498 -n old-k8s-version-985498
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-985498 -n old-k8s-version-985498: exit status 2 (282.137422ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-985498 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-985498 -n old-k8s-version-985498
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-985498 -n old-k8s-version-985498
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.83s)

Test skip (39/333)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
262 TestNetworkPlugins/group/kubenet 3.67
271 TestNetworkPlugins/group/cilium 4.03
285 TestStartStop/group/disable-driver-mounts 0.2

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)
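These DownloadOnly skips all trace back to minikube's preload mechanism: when a preloaded images-and-binaries tarball for the requested Kubernetes version and container runtime is already cached, there is nothing left for the download-only checks to verify. The cache can be inspected directly (the preloaded-tarball path is the usual convention, offered here as an assumption rather than a documented contract):

	ls ~/.minikube/cache/preloaded-tarball/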

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (3.67s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-376648 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-376648

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-376648

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-376648

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-376648

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-376648

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-376648

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-376648

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-376648

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-376648

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-376648

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: /etc/hosts:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: /etc/resolv.conf:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-376648

>>> host: crictl pods:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: crictl containers:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> k8s: describe netcat deployment:
error: context "kubenet-376648" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-376648" does not exist

>>> k8s: netcat logs:
error: context "kubenet-376648" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-376648" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-376648" does not exist

>>> k8s: coredns logs:
error: context "kubenet-376648" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-376648" does not exist

>>> k8s: api server logs:
error: context "kubenet-376648" does not exist

>>> host: /etc/cni:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: ip a s:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: ip r s:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: iptables-save:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: iptables table nat:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-376648" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-376648" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-376648" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: kubelet daemon config:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> k8s: kubelet logs:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-376648

>>> host: docker daemon status:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: docker daemon config:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: docker system info:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: cri-docker daemon status:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: cri-docker daemon config:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: cri-dockerd version:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: containerd daemon status:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: containerd daemon config:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: containerd config dump:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: crio daemon status:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: crio daemon config:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: /etc/crio:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"

>>> host: crio config:
* Profile "kubenet-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-376648"
----------------------- debugLogs end: kubenet-376648 [took: 3.503972549s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-376648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-376648
--- SKIP: TestNetworkPlugins/group/kubenet (3.67s)
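
Every kubectl probe in the debugLogs dump above fails with "context was not found" and every host probe with "Profile not found" because the group was skipped before minikube start ever ran, so no profile or kubeconfig context exists; the errors are expected noise, which is why the header reads [pass: true]. A minimal sketch of how a collector could probe for the context before running kubectl, assuming kubectl's behavior that "config get-contexts <name>" exits non-zero for an unknown context (this is not minikube's actual debugLogs code):

	// context_probe_sketch.go - illustrative only.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// contextExists reports whether kubectl knows the named context;
	// the command exits non-zero when the context is missing.
	func contextExists(name string) bool {
		return exec.Command("kubectl", "config", "get-contexts", name).Run() == nil
	}

	func main() {
		const ctx = "kubenet-376648"
		if !contextExists(ctx) {
			fmt.Printf("context %q missing; skipping kubectl-based collection\n", ctx)
			return
		}
		out, _ := exec.Command("kubectl", "--context", ctx, "get", "nodes").CombinedOutput()
		fmt.Print(string(out))
	}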

TestNetworkPlugins/group/cilium (4.03s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-376648 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-376648

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-376648

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-376648

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-376648

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-376648

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-376648

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-376648

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-376648

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-376648

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-376648

>>> host: /etc/nsswitch.conf:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: /etc/hosts:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: /etc/resolv.conf:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-376648

>>> host: crictl pods:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: crictl containers:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> k8s: describe netcat deployment:
error: context "cilium-376648" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-376648" does not exist

>>> k8s: netcat logs:
error: context "cilium-376648" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-376648" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-376648" does not exist

>>> k8s: coredns logs:
error: context "cilium-376648" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-376648" does not exist

>>> k8s: api server logs:
error: context "cilium-376648" does not exist

>>> host: /etc/cni:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: ip a s:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: ip r s:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: iptables-save:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: iptables table nat:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-376648

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-376648

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-376648" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-376648" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-376648

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-376648

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-376648" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-376648" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-376648" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-376648" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-376648" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: kubelet daemon config:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> k8s: kubelet logs:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-376648

>>> host: docker daemon status:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: docker daemon config:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: docker system info:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: cri-docker daemon status:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: cri-docker daemon config:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: cri-dockerd version:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: containerd daemon status:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: containerd daemon config:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: containerd config dump:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: crio daemon status:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: crio daemon config:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: /etc/crio:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"

>>> host: crio config:
* Profile "cilium-376648" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-376648"
----------------------- debugLogs end: cilium-376648 [took: 3.863652248s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-376648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-376648
--- SKIP: TestNetworkPlugins/group/cilium (4.03s)
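
After each skipped network-plugin group, helpers_test.go deletes the never-started profile with out/minikube-linux-amd64 delete -p, as logged above. A sketch of that cleanup step, assuming a hypothetical cleanupProfile helper (the real implementation in helpers_test.go differs):

	// cleanup_sketch_test.go - illustrative only.
	package integration

	import (
		"os/exec"
		"testing"
	)

	// cleanupProfile deletes a minikube profile; "minikube delete -p"
	// succeeds even when the profile's cluster was never started.
	func cleanupProfile(t *testing.T, binary, profile string) {
		t.Helper()
		if out, err := exec.Command(binary, "delete", "-p", profile).CombinedOutput(); err != nil {
			t.Logf("failed to delete profile %q: %v\n%s", profile, err, out)
		}
	}

Here it would be invoked with binary "out/minikube-linux-amd64" and profile "cilium-376648".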

TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-923793" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-923793
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)
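
The === PAUSE / === CONT markers around this subtest are standard go test -v output: a subtest that calls t.Parallel() registers itself, pauses until the enclosing group's serial phase finishes, and is then continued. A minimal sketch that reproduces the markers (hypothetical names; not minikube's code):

	// pause_cont_sketch_test.go - run with: go test -v -run TestStartStopSketch
	package integration

	import "testing"

	func TestStartStopSketch(t *testing.T) {
		t.Run("group/disable-driver-mounts", func(t *testing.T) {
			t.Parallel() // go test -v prints "=== PAUSE" here and "=== CONT" later
			t.Skip("skipping - only runs on virtualbox")
		})
	}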