Test Report: KVM_Linux 17644

406b3a49e2f2efe39684a1d536accd2e485fd514:2023-11-27:32048

Failed tests (6/322)

TestStartStop/group/embed-certs/serial/SecondStart (51.5s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-700864 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-700864 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4: exit status 90 (51.20404592s)
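Editor's note: a minimal sketch for reproducing this failure outside CI, assuming the same checkout with a built out/minikube-linux-amd64 and the kvm2 driver on PATH. It reuses the exact invocation from the log above and only adds -v=8, minikube's standard verbosity flag, to surface the driver-level cause behind exit status 90:

    # Re-run the failing second start against the leftover profile,
    # with maximum log verbosity sent to stderr for triage.
    out/minikube-linux-amd64 start -p embed-certs-700864 --memory=2200 \
        --alsologtostderr -v=8 --wait=true --embed-certs --driver=kvm2 \
        --kubernetes-version=v1.28.4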

-- stdout --
	* [embed-certs-700864] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17644-122411/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-122411/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node embed-certs-700864 in cluster embed-certs-700864
	* Restarting existing kvm2 VM for "embed-certs-700864" ...
	
	

-- /stdout --
** stderr ** 
	I1127 11:47:12.458576  175172 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:47:12.458797  175172 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:47:12.458835  175172 out.go:309] Setting ErrFile to fd 2...
	I1127 11:47:12.458853  175172 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:47:12.459157  175172 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-122411/.minikube/bin
	I1127 11:47:12.459965  175172 out.go:303] Setting JSON to false
	I1127 11:47:12.461131  175172 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5384,"bootTime":1701080249,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 11:47:12.461208  175172 start.go:138] virtualization: kvm guest
	I1127 11:47:12.463444  175172 out.go:177] * [embed-certs-700864] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 11:47:12.465410  175172 out.go:177]   - MINIKUBE_LOCATION=17644
	I1127 11:47:12.465527  175172 notify.go:220] Checking for updates...
	I1127 11:47:12.466816  175172 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 11:47:12.468550  175172 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17644-122411/kubeconfig
	I1127 11:47:12.470026  175172 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-122411/.minikube
	I1127 11:47:12.471459  175172 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 11:47:12.472767  175172 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 11:47:12.476713  175172 config.go:182] Loaded profile config "embed-certs-700864": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1127 11:47:12.477235  175172 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:47:12.477327  175172 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:47:12.498318  175172 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36493
	I1127 11:47:12.498816  175172 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:47:12.499516  175172 main.go:141] libmachine: Using API Version  1
	I1127 11:47:12.499538  175172 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:47:12.499902  175172 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:47:12.500098  175172 main.go:141] libmachine: (embed-certs-700864) Calling .DriverName
	I1127 11:47:12.500336  175172 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 11:47:12.500764  175172 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:47:12.500814  175172 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:47:12.518653  175172 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37921
	I1127 11:47:12.519083  175172 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:47:12.519615  175172 main.go:141] libmachine: Using API Version  1
	I1127 11:47:12.519640  175172 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:47:12.520012  175172 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:47:12.520187  175172 main.go:141] libmachine: (embed-certs-700864) Calling .DriverName
	I1127 11:47:12.561052  175172 out.go:177] * Using the kvm2 driver based on existing profile
	I1127 11:47:12.562655  175172 start.go:298] selected driver: kvm2
	I1127 11:47:12.562673  175172 start.go:902] validating driver "kvm2" against &{Name:embed-certs-700864 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-700864 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.152 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:47:12.562808  175172 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 11:47:12.563668  175172 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:47:12.563770  175172 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17644-122411/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1127 11:47:12.581748  175172 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1127 11:47:12.582254  175172 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1127 11:47:12.582341  175172 cni.go:84] Creating CNI manager for ""
	I1127 11:47:12.582369  175172 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1127 11:47:12.582391  175172 start_flags.go:323] config:
	{Name:embed-certs-700864 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-700864 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.152 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:47:12.582639  175172 iso.go:125] acquiring lock: {Name:mk7a2a8e57d33d30020383e75b407d4341747681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:47:12.585337  175172 out.go:177] * Starting control plane node embed-certs-700864 in cluster embed-certs-700864
	I1127 11:47:12.586956  175172 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1127 11:47:12.586994  175172 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17644-122411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1127 11:47:12.587005  175172 cache.go:56] Caching tarball of preloaded images
	I1127 11:47:12.587090  175172 preload.go:174] Found /home/jenkins/minikube-integration/17644-122411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1127 11:47:12.587108  175172 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1127 11:47:12.587274  175172 profile.go:148] Saving config to /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/embed-certs-700864/config.json ...
	I1127 11:47:12.587507  175172 start.go:365] acquiring machines lock for embed-certs-700864: {Name:mkfbf5a28821d500d0d8d1f07fcf8da9a205c742 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1127 11:47:38.752521  175172 start.go:369] acquired machines lock for "embed-certs-700864" in 26.164976272s
	I1127 11:47:38.752604  175172 start.go:96] Skipping create...Using existing machine configuration
	I1127 11:47:38.752613  175172 fix.go:54] fixHost starting: 
	I1127 11:47:38.753139  175172 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:47:38.753200  175172 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:47:38.770459  175172 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40567
	I1127 11:47:38.770880  175172 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:47:38.771363  175172 main.go:141] libmachine: Using API Version  1
	I1127 11:47:38.771390  175172 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:47:38.771702  175172 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:47:38.771862  175172 main.go:141] libmachine: (embed-certs-700864) Calling .DriverName
	I1127 11:47:38.771993  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetState
	I1127 11:47:38.773815  175172 fix.go:102] recreateIfNeeded on embed-certs-700864: state=Stopped err=<nil>
	I1127 11:47:38.773870  175172 main.go:141] libmachine: (embed-certs-700864) Calling .DriverName
	W1127 11:47:38.774003  175172 fix.go:128] unexpected machine state, will restart: <nil>
	I1127 11:47:38.776020  175172 out.go:177] * Restarting existing kvm2 VM for "embed-certs-700864" ...
	I1127 11:47:38.777463  175172 main.go:141] libmachine: (embed-certs-700864) Calling .Start
	I1127 11:47:38.777635  175172 main.go:141] libmachine: (embed-certs-700864) Ensuring networks are active...
	I1127 11:47:38.778274  175172 main.go:141] libmachine: (embed-certs-700864) Ensuring network default is active
	I1127 11:47:38.778640  175172 main.go:141] libmachine: (embed-certs-700864) Ensuring network mk-embed-certs-700864 is active
	I1127 11:47:38.779062  175172 main.go:141] libmachine: (embed-certs-700864) Getting domain xml...
	I1127 11:47:38.779810  175172 main.go:141] libmachine: (embed-certs-700864) Creating domain...
	I1127 11:47:40.177137  175172 main.go:141] libmachine: (embed-certs-700864) Waiting to get IP...
	I1127 11:47:40.178263  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:40.178817  175172 main.go:141] libmachine: (embed-certs-700864) DBG | unable to find current IP address of domain embed-certs-700864 in network mk-embed-certs-700864
	I1127 11:47:40.178971  175172 main.go:141] libmachine: (embed-certs-700864) DBG | I1127 11:47:40.178797  175510 retry.go:31] will retry after 253.631265ms: waiting for machine to come up
	I1127 11:47:40.434437  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:40.435101  175172 main.go:141] libmachine: (embed-certs-700864) DBG | unable to find current IP address of domain embed-certs-700864 in network mk-embed-certs-700864
	I1127 11:47:40.435133  175172 main.go:141] libmachine: (embed-certs-700864) DBG | I1127 11:47:40.435046  175510 retry.go:31] will retry after 281.267392ms: waiting for machine to come up
	I1127 11:47:40.717422  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:40.717834  175172 main.go:141] libmachine: (embed-certs-700864) DBG | unable to find current IP address of domain embed-certs-700864 in network mk-embed-certs-700864
	I1127 11:47:40.717906  175172 main.go:141] libmachine: (embed-certs-700864) DBG | I1127 11:47:40.717814  175510 retry.go:31] will retry after 485.584725ms: waiting for machine to come up
	I1127 11:47:41.205494  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:41.206157  175172 main.go:141] libmachine: (embed-certs-700864) DBG | unable to find current IP address of domain embed-certs-700864 in network mk-embed-certs-700864
	I1127 11:47:41.206199  175172 main.go:141] libmachine: (embed-certs-700864) DBG | I1127 11:47:41.206097  175510 retry.go:31] will retry after 388.200842ms: waiting for machine to come up
	I1127 11:47:41.596603  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:41.597302  175172 main.go:141] libmachine: (embed-certs-700864) DBG | unable to find current IP address of domain embed-certs-700864 in network mk-embed-certs-700864
	I1127 11:47:41.597340  175172 main.go:141] libmachine: (embed-certs-700864) DBG | I1127 11:47:41.597260  175510 retry.go:31] will retry after 676.758486ms: waiting for machine to come up
	I1127 11:47:42.276197  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:42.276820  175172 main.go:141] libmachine: (embed-certs-700864) DBG | unable to find current IP address of domain embed-certs-700864 in network mk-embed-certs-700864
	I1127 11:47:42.276853  175172 main.go:141] libmachine: (embed-certs-700864) DBG | I1127 11:47:42.276784  175510 retry.go:31] will retry after 679.44717ms: waiting for machine to come up
	I1127 11:47:42.957543  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:42.958085  175172 main.go:141] libmachine: (embed-certs-700864) DBG | unable to find current IP address of domain embed-certs-700864 in network mk-embed-certs-700864
	I1127 11:47:42.958104  175172 main.go:141] libmachine: (embed-certs-700864) DBG | I1127 11:47:42.958014  175510 retry.go:31] will retry after 842.638044ms: waiting for machine to come up
	I1127 11:47:43.802104  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:43.802697  175172 main.go:141] libmachine: (embed-certs-700864) DBG | unable to find current IP address of domain embed-certs-700864 in network mk-embed-certs-700864
	I1127 11:47:43.802767  175172 main.go:141] libmachine: (embed-certs-700864) DBG | I1127 11:47:43.802663  175510 retry.go:31] will retry after 1.088363212s: waiting for machine to come up
	I1127 11:47:44.892478  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:44.893073  175172 main.go:141] libmachine: (embed-certs-700864) DBG | unable to find current IP address of domain embed-certs-700864 in network mk-embed-certs-700864
	I1127 11:47:44.893106  175172 main.go:141] libmachine: (embed-certs-700864) DBG | I1127 11:47:44.892980  175510 retry.go:31] will retry after 1.215898752s: waiting for machine to come up
	I1127 11:47:46.110457  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:46.111058  175172 main.go:141] libmachine: (embed-certs-700864) DBG | unable to find current IP address of domain embed-certs-700864 in network mk-embed-certs-700864
	I1127 11:47:46.111087  175172 main.go:141] libmachine: (embed-certs-700864) DBG | I1127 11:47:46.110991  175510 retry.go:31] will retry after 2.172115928s: waiting for machine to come up
	I1127 11:47:48.284144  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:48.284739  175172 main.go:141] libmachine: (embed-certs-700864) DBG | unable to find current IP address of domain embed-certs-700864 in network mk-embed-certs-700864
	I1127 11:47:48.284764  175172 main.go:141] libmachine: (embed-certs-700864) DBG | I1127 11:47:48.284671  175510 retry.go:31] will retry after 2.821552368s: waiting for machine to come up
	I1127 11:47:51.108175  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:51.108740  175172 main.go:141] libmachine: (embed-certs-700864) DBG | unable to find current IP address of domain embed-certs-700864 in network mk-embed-certs-700864
	I1127 11:47:51.108770  175172 main.go:141] libmachine: (embed-certs-700864) DBG | I1127 11:47:51.108674  175510 retry.go:31] will retry after 3.375387624s: waiting for machine to come up
	I1127 11:47:54.485322  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:54.485771  175172 main.go:141] libmachine: (embed-certs-700864) DBG | unable to find current IP address of domain embed-certs-700864 in network mk-embed-certs-700864
	I1127 11:47:54.485798  175172 main.go:141] libmachine: (embed-certs-700864) DBG | I1127 11:47:54.485720  175510 retry.go:31] will retry after 3.81726379s: waiting for machine to come up
	I1127 11:47:58.304287  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:58.304785  175172 main.go:141] libmachine: (embed-certs-700864) Found IP for machine: 192.168.72.152
	I1127 11:47:58.304805  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has current primary IP address 192.168.72.152 and MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:58.304823  175172 main.go:141] libmachine: (embed-certs-700864) Reserving static IP address...
	I1127 11:47:58.305239  175172 main.go:141] libmachine: (embed-certs-700864) DBG | found host DHCP lease matching {name: "embed-certs-700864", mac: "52:54:00:ce:0e:7f", ip: "192.168.72.152"} in network mk-embed-certs-700864: {Iface:virbr4 ExpiryTime:2023-11-27 12:45:42 +0000 UTC Type:0 Mac:52:54:00:ce:0e:7f Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:embed-certs-700864 Clientid:01:52:54:00:ce:0e:7f}
	I1127 11:47:58.305274  175172 main.go:141] libmachine: (embed-certs-700864) DBG | skip adding static IP to network mk-embed-certs-700864 - found existing host DHCP lease matching {name: "embed-certs-700864", mac: "52:54:00:ce:0e:7f", ip: "192.168.72.152"}
	I1127 11:47:58.305293  175172 main.go:141] libmachine: (embed-certs-700864) Reserved static IP address: 192.168.72.152
	I1127 11:47:58.305313  175172 main.go:141] libmachine: (embed-certs-700864) Waiting for SSH to be available...
	I1127 11:47:58.305331  175172 main.go:141] libmachine: (embed-certs-700864) DBG | Getting to WaitForSSH function...
	I1127 11:47:58.307464  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:58.307763  175172 main.go:141] libmachine: (embed-certs-700864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:0e:7f", ip: ""} in network mk-embed-certs-700864: {Iface:virbr4 ExpiryTime:2023-11-27 12:45:42 +0000 UTC Type:0 Mac:52:54:00:ce:0e:7f Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:embed-certs-700864 Clientid:01:52:54:00:ce:0e:7f}
	I1127 11:47:58.307791  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined IP address 192.168.72.152 and MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:58.307954  175172 main.go:141] libmachine: (embed-certs-700864) DBG | Using SSH client type: external
	I1127 11:47:58.307971  175172 main.go:141] libmachine: (embed-certs-700864) DBG | Using SSH private key: /home/jenkins/minikube-integration/17644-122411/.minikube/machines/embed-certs-700864/id_rsa (-rw-------)
	I1127 11:47:58.307990  175172 main.go:141] libmachine: (embed-certs-700864) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.152 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17644-122411/.minikube/machines/embed-certs-700864/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1127 11:47:58.308004  175172 main.go:141] libmachine: (embed-certs-700864) DBG | About to run SSH command:
	I1127 11:47:58.308053  175172 main.go:141] libmachine: (embed-certs-700864) DBG | exit 0
	I1127 11:47:58.394658  175172 main.go:141] libmachine: (embed-certs-700864) DBG | SSH cmd err, output: <nil>: 
	I1127 11:47:58.395036  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetConfigRaw
	I1127 11:47:58.395740  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetIP
	I1127 11:47:58.398261  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:58.398687  175172 main.go:141] libmachine: (embed-certs-700864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:0e:7f", ip: ""} in network mk-embed-certs-700864: {Iface:virbr4 ExpiryTime:2023-11-27 12:45:42 +0000 UTC Type:0 Mac:52:54:00:ce:0e:7f Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:embed-certs-700864 Clientid:01:52:54:00:ce:0e:7f}
	I1127 11:47:58.398721  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined IP address 192.168.72.152 and MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:58.398923  175172 profile.go:148] Saving config to /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/embed-certs-700864/config.json ...
	I1127 11:47:58.399107  175172 machine.go:88] provisioning docker machine ...
	I1127 11:47:58.399124  175172 main.go:141] libmachine: (embed-certs-700864) Calling .DriverName
	I1127 11:47:58.399331  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetMachineName
	I1127 11:47:58.399536  175172 buildroot.go:166] provisioning hostname "embed-certs-700864"
	I1127 11:47:58.399560  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetMachineName
	I1127 11:47:58.399743  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHHostname
	I1127 11:47:58.401990  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:58.402322  175172 main.go:141] libmachine: (embed-certs-700864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:0e:7f", ip: ""} in network mk-embed-certs-700864: {Iface:virbr4 ExpiryTime:2023-11-27 12:45:42 +0000 UTC Type:0 Mac:52:54:00:ce:0e:7f Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:embed-certs-700864 Clientid:01:52:54:00:ce:0e:7f}
	I1127 11:47:58.402350  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined IP address 192.168.72.152 and MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:58.402477  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHPort
	I1127 11:47:58.402651  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHKeyPath
	I1127 11:47:58.402816  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHKeyPath
	I1127 11:47:58.402950  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHUsername
	I1127 11:47:58.403119  175172 main.go:141] libmachine: Using SSH client type: native
	I1127 11:47:58.403490  175172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I1127 11:47:58.403505  175172 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-700864 && echo "embed-certs-700864" | sudo tee /etc/hostname
	I1127 11:47:58.535277  175172 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-700864
	
	I1127 11:47:58.535315  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHHostname
	I1127 11:47:58.538255  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:58.538592  175172 main.go:141] libmachine: (embed-certs-700864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:0e:7f", ip: ""} in network mk-embed-certs-700864: {Iface:virbr4 ExpiryTime:2023-11-27 12:45:42 +0000 UTC Type:0 Mac:52:54:00:ce:0e:7f Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:embed-certs-700864 Clientid:01:52:54:00:ce:0e:7f}
	I1127 11:47:58.538622  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined IP address 192.168.72.152 and MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:58.538727  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHPort
	I1127 11:47:58.538927  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHKeyPath
	I1127 11:47:58.539099  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHKeyPath
	I1127 11:47:58.539227  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHUsername
	I1127 11:47:58.539404  175172 main.go:141] libmachine: Using SSH client type: native
	I1127 11:47:58.539782  175172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I1127 11:47:58.539803  175172 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-700864' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-700864/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-700864' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1127 11:47:58.667493  175172 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1127 11:47:58.667528  175172 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17644-122411/.minikube CaCertPath:/home/jenkins/minikube-integration/17644-122411/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17644-122411/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17644-122411/.minikube}
	I1127 11:47:58.667562  175172 buildroot.go:174] setting up certificates
	I1127 11:47:58.667581  175172 provision.go:83] configureAuth start
	I1127 11:47:58.667596  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetMachineName
	I1127 11:47:58.667857  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetIP
	I1127 11:47:58.670399  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:58.670790  175172 main.go:141] libmachine: (embed-certs-700864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:0e:7f", ip: ""} in network mk-embed-certs-700864: {Iface:virbr4 ExpiryTime:2023-11-27 12:45:42 +0000 UTC Type:0 Mac:52:54:00:ce:0e:7f Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:embed-certs-700864 Clientid:01:52:54:00:ce:0e:7f}
	I1127 11:47:58.670821  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined IP address 192.168.72.152 and MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:58.670922  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHHostname
	I1127 11:47:58.673227  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:58.673553  175172 main.go:141] libmachine: (embed-certs-700864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:0e:7f", ip: ""} in network mk-embed-certs-700864: {Iface:virbr4 ExpiryTime:2023-11-27 12:45:42 +0000 UTC Type:0 Mac:52:54:00:ce:0e:7f Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:embed-certs-700864 Clientid:01:52:54:00:ce:0e:7f}
	I1127 11:47:58.673589  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined IP address 192.168.72.152 and MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:58.673702  175172 provision.go:138] copyHostCerts
	I1127 11:47:58.673749  175172 exec_runner.go:144] found /home/jenkins/minikube-integration/17644-122411/.minikube/ca.pem, removing ...
	I1127 11:47:58.673755  175172 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17644-122411/.minikube/ca.pem
	I1127 11:47:58.673804  175172 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-122411/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17644-122411/.minikube/ca.pem (1078 bytes)
	I1127 11:47:58.673892  175172 exec_runner.go:144] found /home/jenkins/minikube-integration/17644-122411/.minikube/cert.pem, removing ...
	I1127 11:47:58.673905  175172 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17644-122411/.minikube/cert.pem
	I1127 11:47:58.673924  175172 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-122411/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17644-122411/.minikube/cert.pem (1123 bytes)
	I1127 11:47:58.673984  175172 exec_runner.go:144] found /home/jenkins/minikube-integration/17644-122411/.minikube/key.pem, removing ...
	I1127 11:47:58.673991  175172 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17644-122411/.minikube/key.pem
	I1127 11:47:58.674007  175172 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-122411/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17644-122411/.minikube/key.pem (1679 bytes)
	I1127 11:47:58.674065  175172 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17644-122411/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17644-122411/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17644-122411/.minikube/certs/ca-key.pem org=jenkins.embed-certs-700864 san=[192.168.72.152 192.168.72.152 localhost 127.0.0.1 minikube embed-certs-700864]
	I1127 11:47:58.838371  175172 provision.go:172] copyRemoteCerts
	I1127 11:47:58.838433  175172 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 11:47:58.838456  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHHostname
	I1127 11:47:58.841251  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:58.841591  175172 main.go:141] libmachine: (embed-certs-700864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:0e:7f", ip: ""} in network mk-embed-certs-700864: {Iface:virbr4 ExpiryTime:2023-11-27 12:45:42 +0000 UTC Type:0 Mac:52:54:00:ce:0e:7f Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:embed-certs-700864 Clientid:01:52:54:00:ce:0e:7f}
	I1127 11:47:58.841615  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined IP address 192.168.72.152 and MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:58.841850  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHPort
	I1127 11:47:58.842072  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHKeyPath
	I1127 11:47:58.842214  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHUsername
	I1127 11:47:58.842358  175172 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/embed-certs-700864/id_rsa Username:docker}
	I1127 11:47:58.932518  175172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-122411/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1127 11:47:58.954163  175172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-122411/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1127 11:47:58.978151  175172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-122411/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1127 11:47:59.000709  175172 provision.go:86] duration metric: configureAuth took 333.110733ms
	I1127 11:47:59.000741  175172 buildroot.go:189] setting minikube options for container-runtime
	I1127 11:47:59.001023  175172 config.go:182] Loaded profile config "embed-certs-700864": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1127 11:47:59.001056  175172 main.go:141] libmachine: (embed-certs-700864) Calling .DriverName
	I1127 11:47:59.001338  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHHostname
	I1127 11:47:59.004260  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:59.004702  175172 main.go:141] libmachine: (embed-certs-700864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:0e:7f", ip: ""} in network mk-embed-certs-700864: {Iface:virbr4 ExpiryTime:2023-11-27 12:45:42 +0000 UTC Type:0 Mac:52:54:00:ce:0e:7f Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:embed-certs-700864 Clientid:01:52:54:00:ce:0e:7f}
	I1127 11:47:59.004744  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined IP address 192.168.72.152 and MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:59.004946  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHPort
	I1127 11:47:59.005150  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHKeyPath
	I1127 11:47:59.005333  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHKeyPath
	I1127 11:47:59.005496  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHUsername
	I1127 11:47:59.005703  175172 main.go:141] libmachine: Using SSH client type: native
	I1127 11:47:59.006057  175172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I1127 11:47:59.006071  175172 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1127 11:47:59.124235  175172 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1127 11:47:59.124259  175172 buildroot.go:70] root file system type: tmpfs
	I1127 11:47:59.124409  175172 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1127 11:47:59.124439  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHHostname
	I1127 11:47:59.127029  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:59.127342  175172 main.go:141] libmachine: (embed-certs-700864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:0e:7f", ip: ""} in network mk-embed-certs-700864: {Iface:virbr4 ExpiryTime:2023-11-27 12:45:42 +0000 UTC Type:0 Mac:52:54:00:ce:0e:7f Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:embed-certs-700864 Clientid:01:52:54:00:ce:0e:7f}
	I1127 11:47:59.127369  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined IP address 192.168.72.152 and MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:59.127570  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHPort
	I1127 11:47:59.127745  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHKeyPath
	I1127 11:47:59.127916  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHKeyPath
	I1127 11:47:59.128051  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHUsername
	I1127 11:47:59.128239  175172 main.go:141] libmachine: Using SSH client type: native
	I1127 11:47:59.128621  175172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I1127 11:47:59.128685  175172 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1127 11:47:59.260322  175172 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1127 11:47:59.260356  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHHostname
	I1127 11:47:59.263108  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:59.263432  175172 main.go:141] libmachine: (embed-certs-700864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:0e:7f", ip: ""} in network mk-embed-certs-700864: {Iface:virbr4 ExpiryTime:2023-11-27 12:45:42 +0000 UTC Type:0 Mac:52:54:00:ce:0e:7f Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:embed-certs-700864 Clientid:01:52:54:00:ce:0e:7f}
	I1127 11:47:59.263460  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined IP address 192.168.72.152 and MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:47:59.263643  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHPort
	I1127 11:47:59.263862  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHKeyPath
	I1127 11:47:59.264060  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHKeyPath
	I1127 11:47:59.264232  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHUsername
	I1127 11:47:59.264451  175172 main.go:141] libmachine: Using SSH client type: native
	I1127 11:47:59.264776  175172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I1127 11:47:59.264792  175172 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1127 11:48:00.233741  175172 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1127 11:48:00.233786  175172 machine.go:91] provisioned docker machine in 1.834649415s
	I1127 11:48:00.233799  175172 start.go:300] post-start starting for "embed-certs-700864" (driver="kvm2")
	I1127 11:48:00.233809  175172 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 11:48:00.233825  175172 main.go:141] libmachine: (embed-certs-700864) Calling .DriverName
	I1127 11:48:00.234196  175172 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 11:48:00.234233  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHHostname
	I1127 11:48:00.236807  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:48:00.237197  175172 main.go:141] libmachine: (embed-certs-700864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:0e:7f", ip: ""} in network mk-embed-certs-700864: {Iface:virbr4 ExpiryTime:2023-11-27 12:45:42 +0000 UTC Type:0 Mac:52:54:00:ce:0e:7f Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:embed-certs-700864 Clientid:01:52:54:00:ce:0e:7f}
	I1127 11:48:00.237230  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined IP address 192.168.72.152 and MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:48:00.237391  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHPort
	I1127 11:48:00.237603  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHKeyPath
	I1127 11:48:00.237819  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHUsername
	I1127 11:48:00.237979  175172 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/embed-certs-700864/id_rsa Username:docker}
	I1127 11:48:00.333390  175172 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 11:48:00.337236  175172 info.go:137] Remote host: Buildroot 2021.02.12
	I1127 11:48:00.337260  175172 filesync.go:126] Scanning /home/jenkins/minikube-integration/17644-122411/.minikube/addons for local assets ...
	I1127 11:48:00.337326  175172 filesync.go:126] Scanning /home/jenkins/minikube-integration/17644-122411/.minikube/files for local assets ...
	I1127 11:48:00.337442  175172 filesync.go:149] local asset: /home/jenkins/minikube-integration/17644-122411/.minikube/files/etc/ssl/certs/1296532.pem -> 1296532.pem in /etc/ssl/certs
	I1127 11:48:00.337555  175172 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1127 11:48:00.346424  175172 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-122411/.minikube/files/etc/ssl/certs/1296532.pem --> /etc/ssl/certs/1296532.pem (1708 bytes)
	I1127 11:48:00.367206  175172 start.go:303] post-start completed in 133.394365ms
	I1127 11:48:00.367228  175172 fix.go:56] fixHost completed within 21.61461526s
	I1127 11:48:00.367252  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHHostname
	I1127 11:48:00.369931  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:48:00.370290  175172 main.go:141] libmachine: (embed-certs-700864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:0e:7f", ip: ""} in network mk-embed-certs-700864: {Iface:virbr4 ExpiryTime:2023-11-27 12:45:42 +0000 UTC Type:0 Mac:52:54:00:ce:0e:7f Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:embed-certs-700864 Clientid:01:52:54:00:ce:0e:7f}
	I1127 11:48:00.370320  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined IP address 192.168.72.152 and MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:48:00.370450  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHPort
	I1127 11:48:00.370680  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHKeyPath
	I1127 11:48:00.370837  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHKeyPath
	I1127 11:48:00.370999  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHUsername
	I1127 11:48:00.371120  175172 main.go:141] libmachine: Using SSH client type: native
	I1127 11:48:00.371481  175172 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.72.152 22 <nil> <nil>}
	I1127 11:48:00.371496  175172 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1127 11:48:00.492050  175172 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701085680.435741494
	
	I1127 11:48:00.492076  175172 fix.go:206] guest clock: 1701085680.435741494
	I1127 11:48:00.492085  175172 fix.go:219] Guest: 2023-11-27 11:48:00.435741494 +0000 UTC Remote: 2023-11-27 11:48:00.36723241 +0000 UTC m=+47.971566335 (delta=68.509084ms)
	I1127 11:48:00.492112  175172 fix.go:190] guest clock delta is within tolerance: 68.509084ms
	I1127 11:48:00.492123  175172 start.go:83] releasing machines lock for "embed-certs-700864", held for 21.739551694s
	I1127 11:48:00.492157  175172 main.go:141] libmachine: (embed-certs-700864) Calling .DriverName
	I1127 11:48:00.492465  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetIP
	I1127 11:48:00.495671  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:48:00.496044  175172 main.go:141] libmachine: (embed-certs-700864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:0e:7f", ip: ""} in network mk-embed-certs-700864: {Iface:virbr4 ExpiryTime:2023-11-27 12:45:42 +0000 UTC Type:0 Mac:52:54:00:ce:0e:7f Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:embed-certs-700864 Clientid:01:52:54:00:ce:0e:7f}
	I1127 11:48:00.496081  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined IP address 192.168.72.152 and MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:48:00.496221  175172 main.go:141] libmachine: (embed-certs-700864) Calling .DriverName
	I1127 11:48:00.496763  175172 main.go:141] libmachine: (embed-certs-700864) Calling .DriverName
	I1127 11:48:00.496972  175172 main.go:141] libmachine: (embed-certs-700864) Calling .DriverName
	I1127 11:48:00.497076  175172 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 11:48:00.497126  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHHostname
	I1127 11:48:00.497215  175172 ssh_runner.go:195] Run: cat /version.json
	I1127 11:48:00.497247  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHHostname
	I1127 11:48:00.499850  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:48:00.500130  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:48:00.500293  175172 main.go:141] libmachine: (embed-certs-700864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:0e:7f", ip: ""} in network mk-embed-certs-700864: {Iface:virbr4 ExpiryTime:2023-11-27 12:45:42 +0000 UTC Type:0 Mac:52:54:00:ce:0e:7f Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:embed-certs-700864 Clientid:01:52:54:00:ce:0e:7f}
	I1127 11:48:00.500382  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined IP address 192.168.72.152 and MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:48:00.500603  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHPort
	I1127 11:48:00.500608  175172 main.go:141] libmachine: (embed-certs-700864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:0e:7f", ip: ""} in network mk-embed-certs-700864: {Iface:virbr4 ExpiryTime:2023-11-27 12:45:42 +0000 UTC Type:0 Mac:52:54:00:ce:0e:7f Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:embed-certs-700864 Clientid:01:52:54:00:ce:0e:7f}
	I1127 11:48:00.500629  175172 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined IP address 192.168.72.152 and MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:48:00.500782  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHKeyPath
	I1127 11:48:00.500841  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHPort
	I1127 11:48:00.501026  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHKeyPath
	I1127 11:48:00.501033  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHUsername
	I1127 11:48:00.501214  175172 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHUsername
	I1127 11:48:00.501221  175172 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/embed-certs-700864/id_rsa Username:docker}
	I1127 11:48:00.501354  175172 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/embed-certs-700864/id_rsa Username:docker}
	I1127 11:48:00.625815  175172 ssh_runner.go:195] Run: systemctl --version
	I1127 11:48:00.633685  175172 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1127 11:48:00.641037  175172 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1127 11:48:00.641104  175172 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 11:48:00.659278  175172 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1127 11:48:00.659306  175172 start.go:472] detecting cgroup driver to use...
	I1127 11:48:00.659439  175172 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 11:48:00.680982  175172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1127 11:48:00.694617  175172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1127 11:48:00.710029  175172 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1127 11:48:00.710157  175172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1127 11:48:00.724215  175172 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1127 11:48:00.737585  175172 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1127 11:48:00.747807  175172 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1127 11:48:00.760898  175172 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1127 11:48:00.772736  175172 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1127 11:48:00.783729  175172 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1127 11:48:00.793117  175172 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1127 11:48:00.803199  175172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 11:48:00.914726  175172 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1127 11:48:00.935279  175172 start.go:472] detecting cgroup driver to use...
	I1127 11:48:00.935363  175172 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1127 11:48:00.950769  175172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1127 11:48:00.968328  175172 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1127 11:48:00.994259  175172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1127 11:48:01.010409  175172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1127 11:48:01.025066  175172 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1127 11:48:01.060211  175172 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1127 11:48:01.074688  175172 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 11:48:01.093222  175172 ssh_runner.go:195] Run: which cri-dockerd
	I1127 11:48:01.097166  175172 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1127 11:48:01.105725  175172 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1127 11:48:01.122156  175172 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1127 11:48:01.228384  175172 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1127 11:48:01.359495  175172 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1127 11:48:01.359675  175172 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1127 11:48:01.379150  175172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 11:48:01.503647  175172 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1127 11:48:03.073985  175172 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.570292884s)
	I1127 11:48:03.074079  175172 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1127 11:48:03.192075  175172 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1127 11:48:03.316900  175172 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1127 11:48:03.430103  175172 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 11:48:03.562475  175172 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1127 11:48:03.584181  175172 out.go:177] 
	W1127 11:48:03.585857  175172 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W1127 11:48:03.585885  175172 out.go:239] * 
	W1127 11:48:03.586906  175172 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1127 11:48:03.588282  175172 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p embed-certs-700864 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4": exit status 90
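The decisive error above is `sudo systemctl restart cri-docker.socket` exiting with status 1, and minikube's own message defers to journalctl. A minimal sketch of reproducing that check by hand inside the VM (assuming the profile's VM is still reachable over SSH; the unit names are the standard cri-dockerd socket/service pair, not taken from this log):

    # open a shell in the profile's VM
    out/minikube-linux-amd64 ssh -p embed-certs-700864
    # inspect the failing socket unit and its recent journal entries
    sudo systemctl status cri-docker.socket
    sudo journalctl -xe -u cri-docker.socket --no-pager
    # socket activation hands off to the matching service unit; check it too
    sudo systemctl status cri-docker.service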
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-700864 -n embed-certs-700864
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-700864 -n embed-certs-700864: exit status 6 (284.590864ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1127 11:48:03.870800  175730 status.go:415] kubeconfig endpoint: extract IP: "embed-certs-700864" does not appear in /home/jenkins/minikube-integration/17644-122411/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "embed-certs-700864" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (51.50s)
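Every status probe in the post-mortem above fails the same way: exit 6 because "embed-certs-700864" is missing from the kubeconfig. The warning text names the remedy; a sketch follows (note that in this run the second start aborted before the kubeconfig entry could be rewritten, so the context may genuinely not exist yet):

    # repoint the kubeconfig entry for this profile at the current VM IP
    out/minikube-linux-amd64 update-context -p embed-certs-700864
    # confirm the context is now present
    kubectl config get-contexts embed-certs-700864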

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-700864" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-700864 -n embed-certs-700864
E1127 11:48:04.045146  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-700864 -n embed-certs-700864: exit status 6 (288.335709ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1127 11:48:04.152453  175758 status.go:415] kubeconfig endpoint: extract IP: "embed-certs-700864" does not appear in /home/jenkins/minikube-integration/17644-122411/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "embed-certs-700864" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.29s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.33s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-700864" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-700864 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-700864 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (54.691378ms)

** stderr ** 
	error: context "embed-certs-700864" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-700864 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-700864 -n embed-certs-700864
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-700864 -n embed-certs-700864: exit status 6 (278.644267ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1127 11:48:04.493663  175796 status.go:415] kubeconfig endpoint: extract IP: "embed-certs-700864" does not appear in /home/jenkins/minikube-integration/17644-122411/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "embed-certs-700864" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.33s)
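The assertion at start_stop_delete_test.go:297 wants the dashboard-metrics-scraper deployment to reference registry.k8s.io/echoserver:1.4, i.e. the kind of `addons enable dashboard --images=MetricsScraper=...` override visible in the Audit table further below. Once a working context exists, roughly the same check can be made by hand; the jsonpath below is illustrative, not the test's own query:

    # list each deployment in the dashboard namespace with its container images
    kubectl --context embed-certs-700864 -n kubernetes-dashboard get deploy \
      -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.template.spec.containers[*].image}{"\n"}{end}'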

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (2.55s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-700864 "sudo crictl images -o json"
E1127 11:48:06.545836  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/false-128714/client.crt: no such file or directory
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p embed-certs-700864 "sudo crictl images -o json": exit status 1 (2.261812415s)

-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-linux-amd64 ssh -p embed-certs-700864 \"sudo crictl images -o json\"": exit status 1
start_stop_delete_test.go:304: failed to decode images json invalid character '\x1b' looking for beginning of value. output:
FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-700864 -n embed-certs-700864
E1127 11:48:06.964562  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/gvisor-691176/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-700864 -n embed-certs-700864: exit status 6 (283.925413ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1127 11:48:07.038275  175857 status.go:415] kubeconfig endpoint: extract IP: "embed-certs-700864" does not appear in /home/jenkins/minikube-integration/17644-122411/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "embed-certs-700864" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (2.55s)
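Here crictl hangs until DeadlineExceeded because nothing is serving the CRI v1 image API on the cri-dockerd socket, which is consistent with cri-docker.socket having failed to restart in SecondStart. A hedged sketch of probing the endpoint directly from inside the VM (same socket path as the error message):

    # is the cri-dockerd socket/service pair active at all?
    sudo systemctl is-active cri-docker.socket cri-docker.service
    # ask crictl to validate the endpoint explicitly
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info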

TestStartStop/group/embed-certs/serial/Pause (2.74s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-700864 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-700864 --alsologtostderr -v=1: exit status 80 (2.158050799s)

-- stdout --
	* Pausing node embed-certs-700864 ... 
	
	

-- /stdout --
** stderr ** 
	I1127 11:48:07.129692  175885 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:48:07.129999  175885 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:48:07.130010  175885 out.go:309] Setting ErrFile to fd 2...
	I1127 11:48:07.130017  175885 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:48:07.130321  175885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-122411/.minikube/bin
	I1127 11:48:07.130667  175885 out.go:303] Setting JSON to false
	I1127 11:48:07.130700  175885 mustload.go:65] Loading cluster: embed-certs-700864
	I1127 11:48:07.131225  175885 config.go:182] Loaded profile config "embed-certs-700864": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1127 11:48:07.131698  175885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:48:07.131741  175885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:48:07.147121  175885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39559
	I1127 11:48:07.147630  175885 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:48:07.148216  175885 main.go:141] libmachine: Using API Version  1
	I1127 11:48:07.148237  175885 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:48:07.148594  175885 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:48:07.148789  175885 main.go:141] libmachine: (embed-certs-700864) Calling .GetState
	I1127 11:48:07.150594  175885 host.go:66] Checking if "embed-certs-700864" exists ...
	I1127 11:48:07.150977  175885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:48:07.151017  175885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:48:07.165586  175885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43791
	I1127 11:48:07.166024  175885 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:48:07.166482  175885 main.go:141] libmachine: Using API Version  1
	I1127 11:48:07.166505  175885 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:48:07.166939  175885 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:48:07.167194  175885 main.go:141] libmachine: (embed-certs-700864) Calling .DriverName
	I1127 11:48:07.168361  175885 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false)
extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.32.1-1700142131-17634/minikube-v1.32.1-1700142131-17634-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.32.1-1700142131-17634-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: m
axauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string:/home/jenkins:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-700864 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtu
alboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1127 11:48:07.170668  175885 out.go:177] * Pausing node embed-certs-700864 ... 
	I1127 11:48:07.172006  175885 host.go:66] Checking if "embed-certs-700864" exists ...
	I1127 11:48:07.172418  175885 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:48:07.172467  175885 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:48:07.189945  175885 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44457
	I1127 11:48:07.190416  175885 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:48:07.190977  175885 main.go:141] libmachine: Using API Version  1
	I1127 11:48:07.191010  175885 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:48:07.191349  175885 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:48:07.191564  175885 main.go:141] libmachine: (embed-certs-700864) Calling .DriverName
	I1127 11:48:07.191753  175885 ssh_runner.go:195] Run: systemctl --version
	I1127 11:48:07.191781  175885 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHHostname
	I1127 11:48:07.194825  175885 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:48:07.195244  175885 main.go:141] libmachine: (embed-certs-700864) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:0e:7f", ip: ""} in network mk-embed-certs-700864: {Iface:virbr4 ExpiryTime:2023-11-27 12:45:42 +0000 UTC Type:0 Mac:52:54:00:ce:0e:7f Iaid: IPaddr:192.168.72.152 Prefix:24 Hostname:embed-certs-700864 Clientid:01:52:54:00:ce:0e:7f}
	I1127 11:48:07.195288  175885 main.go:141] libmachine: (embed-certs-700864) DBG | domain embed-certs-700864 has defined IP address 192.168.72.152 and MAC address 52:54:00:ce:0e:7f in network mk-embed-certs-700864
	I1127 11:48:07.195483  175885 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHPort
	I1127 11:48:07.195649  175885 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHKeyPath
	I1127 11:48:07.195835  175885 main.go:141] libmachine: (embed-certs-700864) Calling .GetSSHUsername
	I1127 11:48:07.195981  175885 sshutil.go:53] new ssh client: &{IP:192.168.72.152 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/embed-certs-700864/id_rsa Username:docker}
	I1127 11:48:07.290731  175885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:48:07.305760  175885 pause.go:51] kubelet running: false
	I1127 11:48:07.305819  175885 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1127 11:48:07.322575  175885 retry.go:31] will retry after 197.683321ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I1127 11:48:07.521001  175885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:48:07.537038  175885 pause.go:51] kubelet running: false
	I1127 11:48:07.537113  175885 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1127 11:48:07.553922  175885 retry.go:31] will retry after 259.930939ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I1127 11:48:07.814337  175885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:48:07.828341  175885 pause.go:51] kubelet running: false
	I1127 11:48:07.828421  175885 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1127 11:48:07.845251  175885 retry.go:31] will retry after 732.073601ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I1127 11:48:08.578217  175885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:48:08.591886  175885 pause.go:51] kubelet running: false
	I1127 11:48:08.591941  175885 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1127 11:48:08.605236  175885 retry.go:31] will retry after 562.092566ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I1127 11:48:09.168026  175885 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:48:09.184783  175885 pause.go:51] kubelet running: false
	I1127 11:48:09.184862  175885 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1127 11:48:09.204684  175885 out.go:177] 
	W1127 11:48:09.206033  175885 out.go:239] X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	W1127 11:48:09.206080  175885 out.go:239] * 
	W1127 11:48:09.209759  175885 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1127 11:48:09.211250  175885 out.go:177] 

** /stderr **
start_stop_delete_test.go:311: out/minikube-linux-amd64 pause -p embed-certs-700864 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-700864 -n embed-certs-700864
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-700864 -n embed-certs-700864: exit status 6 (291.358342ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1127 11:48:09.487737  175925 status.go:415] kubeconfig endpoint: extract IP: "embed-certs-700864" does not appear in /home/jenkins/minikube-integration/17644-122411/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "embed-certs-700864" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-700864 -n embed-certs-700864
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-700864 -n embed-certs-700864: exit status 6 (294.342355ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1127 11:48:09.785414  175955 status.go:415] kubeconfig endpoint: extract IP: "embed-certs-700864" does not appear in /home/jenkins/minikube-integration/17644-122411/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "embed-certs-700864" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (2.74s)
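The pause path runs `systemctl disable --now kubelet` repeatedly and aborts because the unit file is absent, which again traces back to the failed SecondStart: provisioning never got far enough to install kubelet.service in this VM. A one-line sanity check (sketch):

    # does any kubelet unit exist in this VM?
    out/minikube-linux-amd64 ssh -p embed-certs-700864 "sudo systemctl list-unit-files kubelet.service"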

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-337707 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p old-k8s-version-337707 "sudo crictl images -o json": exit status 1 (261.151199ms)

-- stdout --
	FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-linux-amd64 ssh -p old-k8s-version-337707 \"sudo crictl images -o json\"": exit status 1
start_stop_delete_test.go:304: failed to decode images json invalid character '\x1b' looking for beginning of value. output:
FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
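Unlike the embed-certs failures above, this VM responds (the post-mortem log collection below succeeds); the mismatch is protocol-level. Kubernetes v1.16.0's dockershim serves only the old v1alpha2 CRI services, while the bundled crictl validates the v1 ImageService, hence the Unimplemented error. On a cluster this old, the image inventory can be taken from the Docker engine directly; a sketch, using the same profile name as above:

    # bypass the CRI endpoint and ask Docker itself
    out/minikube-linux-amd64 ssh -p old-k8s-version-337707 "docker images --format '{{.Repository}}:{{.Tag}}'"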
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-337707 -n old-k8s-version-337707
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-337707 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-337707 logs -n 25: (1.002001652s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p embed-certs-700864 sudo                             | embed-certs-700864           | jenkins | v1.32.0 | 27 Nov 23 11:48 UTC |                     |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p embed-certs-700864                                  | embed-certs-700864           | jenkins | v1.32.0 | 27 Nov 23 11:48 UTC |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-700864                                  | embed-certs-700864           | jenkins | v1.32.0 | 27 Nov 23 11:48 UTC | 27 Nov 23 11:48 UTC |
	| delete  | -p embed-certs-700864                                  | embed-certs-700864           | jenkins | v1.32.0 | 27 Nov 23 11:48 UTC | 27 Nov 23 11:48 UTC |
	| start   | -p newest-cni-693564 --memory=2200 --alsologtostderr   | newest-cni-693564            | jenkins | v1.32.0 | 27 Nov 23 11:48 UTC | 27 Nov 23 11:49 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.28.4            |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-693564             | newest-cni-693564            | jenkins | v1.32.0 | 27 Nov 23 11:49 UTC | 27 Nov 23 11:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-693564                                   | newest-cni-693564            | jenkins | v1.32.0 | 27 Nov 23 11:49 UTC | 27 Nov 23 11:49 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-693564                  | newest-cni-693564            | jenkins | v1.32.0 | 27 Nov 23 11:49 UTC | 27 Nov 23 11:49 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-693564 --memory=2200 --alsologtostderr   | newest-cni-693564            | jenkins | v1.32.0 | 27 Nov 23 11:49 UTC | 27 Nov 23 11:50 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.28.4            |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-693564 sudo                              | newest-cni-693564            | jenkins | v1.32.0 | 27 Nov 23 11:50 UTC | 27 Nov 23 11:50 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-693564                                   | newest-cni-693564            | jenkins | v1.32.0 | 27 Nov 23 11:50 UTC | 27 Nov 23 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-693564                                   | newest-cni-693564            | jenkins | v1.32.0 | 27 Nov 23 11:50 UTC | 27 Nov 23 11:50 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-693564                                   | newest-cni-693564            | jenkins | v1.32.0 | 27 Nov 23 11:50 UTC | 27 Nov 23 11:50 UTC |
	| delete  | -p newest-cni-693564                                   | newest-cni-693564            | jenkins | v1.32.0 | 27 Nov 23 11:50 UTC | 27 Nov 23 11:50 UTC |
	| ssh     | -p no-preload-822966 sudo                              | no-preload-822966            | jenkins | v1.32.0 | 27 Nov 23 11:52 UTC | 27 Nov 23 11:52 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p no-preload-822966                                   | no-preload-822966            | jenkins | v1.32.0 | 27 Nov 23 11:52 UTC | 27 Nov 23 11:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-822966                                   | no-preload-822966            | jenkins | v1.32.0 | 27 Nov 23 11:52 UTC | 27 Nov 23 11:52 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-822966                                   | no-preload-822966            | jenkins | v1.32.0 | 27 Nov 23 11:52 UTC | 27 Nov 23 11:52 UTC |
	| delete  | -p no-preload-822966                                   | no-preload-822966            | jenkins | v1.32.0 | 27 Nov 23 11:52 UTC | 27 Nov 23 11:52 UTC |
	| ssh     | -p                                                     | default-k8s-diff-port-028212 | jenkins | v1.32.0 | 27 Nov 23 11:53 UTC | 27 Nov 23 11:53 UTC |
	|         | default-k8s-diff-port-028212                           |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-028212 | jenkins | v1.32.0 | 27 Nov 23 11:53 UTC | 27 Nov 23 11:53 UTC |
	|         | default-k8s-diff-port-028212                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-028212 | jenkins | v1.32.0 | 27 Nov 23 11:53 UTC | 27 Nov 23 11:53 UTC |
	|         | default-k8s-diff-port-028212                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-028212 | jenkins | v1.32.0 | 27 Nov 23 11:53 UTC | 27 Nov 23 11:53 UTC |
	|         | default-k8s-diff-port-028212                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-028212 | jenkins | v1.32.0 | 27 Nov 23 11:53 UTC | 27 Nov 23 11:53 UTC |
	|         | default-k8s-diff-port-028212                           |                              |         |         |                     |                     |
	| ssh     | -p old-k8s-version-337707 sudo                         | old-k8s-version-337707       | jenkins | v1.32.0 | 27 Nov 23 11:55 UTC |                     |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 11:49:47
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 11:49:47.326054  176850 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:49:47.326310  176850 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:49:47.326318  176850 out.go:309] Setting ErrFile to fd 2...
	I1127 11:49:47.326323  176850 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:49:47.326498  176850 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-122411/.minikube/bin
	I1127 11:49:47.327041  176850 out.go:303] Setting JSON to false
	I1127 11:49:47.328118  176850 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":5539,"bootTime":1701080249,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 11:49:47.328179  176850 start.go:138] virtualization: kvm guest
	I1127 11:49:47.330416  176850 out.go:177] * [newest-cni-693564] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 11:49:47.331818  176850 out.go:177]   - MINIKUBE_LOCATION=17644
	I1127 11:49:47.331823  176850 notify.go:220] Checking for updates...
	I1127 11:49:47.333173  176850 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 11:49:47.334571  176850 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17644-122411/kubeconfig
	I1127 11:49:47.335812  176850 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-122411/.minikube
	I1127 11:49:47.337166  176850 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 11:49:47.338427  176850 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 11:49:47.340156  176850 config.go:182] Loaded profile config "newest-cni-693564": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1127 11:49:47.340606  176850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:49:47.340654  176850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:49:47.356767  176850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41335
	I1127 11:49:47.357282  176850 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:49:47.357908  176850 main.go:141] libmachine: Using API Version  1
	I1127 11:49:47.357934  176850 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:49:47.358314  176850 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:49:47.358497  176850 main.go:141] libmachine: (newest-cni-693564) Calling .DriverName
	I1127 11:49:47.358760  176850 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 11:49:47.359210  176850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:49:47.359262  176850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:49:47.373894  176850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34013
	I1127 11:49:47.374356  176850 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:49:47.374845  176850 main.go:141] libmachine: Using API Version  1
	I1127 11:49:47.374879  176850 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:49:47.375196  176850 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:49:47.375379  176850 main.go:141] libmachine: (newest-cni-693564) Calling .DriverName
	I1127 11:49:47.412123  176850 out.go:177] * Using the kvm2 driver based on existing profile
	I1127 11:49:47.413368  176850 start.go:298] selected driver: kvm2
	I1127 11:49:47.413378  176850 start.go:902] validating driver "kvm2" against &{Name:newest-cni-693564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:newest-cni-693564 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.37 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:
false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:49:47.413458  176850 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 11:49:47.414076  176850 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:49:47.414163  176850 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17644-122411/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1127 11:49:47.429204  176850 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1127 11:49:47.429750  176850 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1127 11:49:47.429840  176850 cni.go:84] Creating CNI manager for ""
	I1127 11:49:47.429862  176850 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1127 11:49:47.429882  176850 start_flags.go:323] config:
	{Name:newest-cni-693564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:newest-cni-693564 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.37 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
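
The cni.go:158 line above shows minikube recommending the bridge CNI when a VM driver ("kvm2") is paired with the "docker" container runtime on Kubernetes v1.24+, where dockershim no longer supplies pod networking. A minimal Go sketch of that decision rule; the function name and signature are illustrative, not minikube's actual API:

	package main

	import "fmt"

	// chooseCNI mirrors the decision logged by cni.go:158: on Kubernetes
	// v1.24+ a VM driver with the docker runtime needs an explicit CNI,
	// and minikube recommends bridge in that case. Names are hypothetical.
	func chooseCNI(driver, runtime string, k8sMinor int) string {
		if k8sMinor >= 24 && runtime == "docker" && driver == "kvm2" {
			return "bridge"
		}
		return "" // defer to the runtime's default networking
	}

	func main() {
		fmt.Println(chooseCNI("kvm2", "docker", 28)) // bridge
	}
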
	I1127 11:49:47.430124  176850 iso.go:125] acquiring lock: {Name:mk7a2a8e57d33d30020383e75b407d4341747681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:49:47.432329  176850 out.go:177] * Starting control plane node newest-cni-693564 in cluster newest-cni-693564
	I1127 11:49:47.433784  176850 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1127 11:49:47.433817  176850 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17644-122411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4
	I1127 11:49:47.433823  176850 cache.go:56] Caching tarball of preloaded images
	I1127 11:49:47.433928  176850 preload.go:174] Found /home/jenkins/minikube-integration/17644-122411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1127 11:49:47.433944  176850 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1127 11:49:47.434068  176850 profile.go:148] Saving config to /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/newest-cni-693564/config.json ...
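
The preload.go lines above look for the per-version image tarball in the local cache and skip the download on a hit. A toy sketch of that cache check, assuming a stand-in fetch function (not minikube's real downloader):

	package main

	import (
		"fmt"
		"os"
	)

	// ensurePreload returns the cached tarball path, downloading only on
	// a cache miss; fetch stands in for the real downloader.
	func ensurePreload(path string, fetch func(string) error) (string, error) {
		if _, err := os.Stat(path); err == nil {
			return path, nil // found in cache, skipping download
		}
		if err := fetch(path); err != nil {
			return "", err
		}
		return path, nil
	}

	func main() {
		p, err := ensurePreload(
			"/tmp/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-amd64.tar.lz4",
			func(dst string) error { return os.WriteFile(dst, nil, 0o644) },
		)
		fmt.Println(p, err)
	}
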
	I1127 11:49:47.434285  176850 start.go:365] acquiring machines lock for newest-cni-693564: {Name:mkfbf5a28821d500d0d8d1f07fcf8da9a205c742 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1127 11:49:47.434332  176850 start.go:369] acquired machines lock for "newest-cni-693564" in 27.069µs
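
start.go:365 acquires a named machines lock specified as {Delay:500ms Timeout:13m0s}; the 27µs acquisition logged here means the lock was free. A toy in-process sketch of poll-until-timeout acquisition, assuming the same delay/timeout shape (minikube's lock is file-based across processes, which this does not reproduce):

	package main

	import (
		"errors"
		"fmt"
		"sync"
		"time"
	)

	// acquire polls TryLock every delay until timeout, echoing the
	// {Delay:500ms Timeout:13m0s} spec in the start.go lines above.
	func acquire(mu *sync.Mutex, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if mu.TryLock() {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out acquiring lock")
			}
			time.Sleep(delay)
		}
	}

	func main() {
		var mu sync.Mutex
		start := time.Now()
		if err := acquire(&mu, 500*time.Millisecond, 13*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("acquired lock in %s\n", time.Since(start)) // e.g. 27.069µs
	}
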
	I1127 11:49:47.434352  176850 start.go:96] Skipping create...Using existing machine configuration
	I1127 11:49:47.434361  176850 fix.go:54] fixHost starting: 
	I1127 11:49:47.434646  176850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:49:47.434671  176850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:49:47.448692  176850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34737
	I1127 11:49:47.449153  176850 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:49:47.449617  176850 main.go:141] libmachine: Using API Version  1
	I1127 11:49:47.449653  176850 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:49:47.449971  176850 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:49:47.450174  176850 main.go:141] libmachine: (newest-cni-693564) Calling .DriverName
	I1127 11:49:47.450382  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetState
	I1127 11:49:47.452004  176850 fix.go:102] recreateIfNeeded on newest-cni-693564: state=Stopped err=<nil>
	I1127 11:49:47.452024  176850 main.go:141] libmachine: (newest-cni-693564) Calling .DriverName
	W1127 11:49:47.452190  176850 fix.go:128] unexpected machine state, will restart: <nil>
	I1127 11:49:47.454206  176850 out.go:177] * Restarting existing kvm2 VM for "newest-cni-693564" ...
	I1127 11:49:47.007455  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:49:49.010113  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:49:45.979076  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:49:48.481713  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:49:47.255594  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:49:49.255811  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:49:51.256596  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:49:47.455661  176850 main.go:141] libmachine: (newest-cni-693564) Calling .Start
	I1127 11:49:47.455857  176850 main.go:141] libmachine: (newest-cni-693564) Ensuring networks are active...
	I1127 11:49:47.456530  176850 main.go:141] libmachine: (newest-cni-693564) Ensuring network default is active
	I1127 11:49:47.457040  176850 main.go:141] libmachine: (newest-cni-693564) Ensuring network mk-newest-cni-693564 is active
	I1127 11:49:47.457509  176850 main.go:141] libmachine: (newest-cni-693564) Getting domain xml...
	I1127 11:49:47.458289  176850 main.go:141] libmachine: (newest-cni-693564) Creating domain...
	I1127 11:49:48.740490  176850 main.go:141] libmachine: (newest-cni-693564) Waiting to get IP...
	I1127 11:49:48.741542  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:49:48.742032  176850 main.go:141] libmachine: (newest-cni-693564) DBG | unable to find current IP address of domain newest-cni-693564 in network mk-newest-cni-693564
	I1127 11:49:48.742101  176850 main.go:141] libmachine: (newest-cni-693564) DBG | I1127 11:49:48.742014  176885 retry.go:31] will retry after 227.905382ms: waiting for machine to come up
	I1127 11:49:48.971755  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:49:48.972387  176850 main.go:141] libmachine: (newest-cni-693564) DBG | unable to find current IP address of domain newest-cni-693564 in network mk-newest-cni-693564
	I1127 11:49:48.972414  176850 main.go:141] libmachine: (newest-cni-693564) DBG | I1127 11:49:48.972317  176885 retry.go:31] will retry after 296.890704ms: waiting for machine to come up
	I1127 11:49:49.270903  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:49:49.271404  176850 main.go:141] libmachine: (newest-cni-693564) DBG | unable to find current IP address of domain newest-cni-693564 in network mk-newest-cni-693564
	I1127 11:49:49.271437  176850 main.go:141] libmachine: (newest-cni-693564) DBG | I1127 11:49:49.271357  176885 retry.go:31] will retry after 487.261878ms: waiting for machine to come up
	I1127 11:49:49.759973  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:49:49.760552  176850 main.go:141] libmachine: (newest-cni-693564) DBG | unable to find current IP address of domain newest-cni-693564 in network mk-newest-cni-693564
	I1127 11:49:49.760584  176850 main.go:141] libmachine: (newest-cni-693564) DBG | I1127 11:49:49.760504  176885 retry.go:31] will retry after 384.39289ms: waiting for machine to come up
	I1127 11:49:50.146142  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:49:50.146782  176850 main.go:141] libmachine: (newest-cni-693564) DBG | unable to find current IP address of domain newest-cni-693564 in network mk-newest-cni-693564
	I1127 11:49:50.146816  176850 main.go:141] libmachine: (newest-cni-693564) DBG | I1127 11:49:50.146727  176885 retry.go:31] will retry after 472.506254ms: waiting for machine to come up
	I1127 11:49:50.621287  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:49:50.621797  176850 main.go:141] libmachine: (newest-cni-693564) DBG | unable to find current IP address of domain newest-cni-693564 in network mk-newest-cni-693564
	I1127 11:49:50.621829  176850 main.go:141] libmachine: (newest-cni-693564) DBG | I1127 11:49:50.621725  176885 retry.go:31] will retry after 761.385293ms: waiting for machine to come up
	I1127 11:49:51.384160  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:49:51.384707  176850 main.go:141] libmachine: (newest-cni-693564) DBG | unable to find current IP address of domain newest-cni-693564 in network mk-newest-cni-693564
	I1127 11:49:51.384733  176850 main.go:141] libmachine: (newest-cni-693564) DBG | I1127 11:49:51.384662  176885 retry.go:31] will retry after 858.221855ms: waiting for machine to come up
	I1127 11:49:52.244373  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:49:52.245006  176850 main.go:141] libmachine: (newest-cni-693564) DBG | unable to find current IP address of domain newest-cni-693564 in network mk-newest-cni-693564
	I1127 11:49:52.245040  176850 main.go:141] libmachine: (newest-cni-693564) DBG | I1127 11:49:52.244934  176885 retry.go:31] will retry after 1.32793588s: waiting for machine to come up
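
The retry.go:31 lines above poll libvirt for the domain's DHCP lease, sleeping a growing, jittered interval between attempts (228ms, 297ms, 487ms, ...) until the machine reports an IP. A minimal Go sketch of that wait-for-IP loop; the lookup callback and the exact growth factor are illustrative assumptions, not minikube's schedule:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP retries lookup with a jittered, growing backoff, in the
	// spirit of the retry.go lines above.
	func waitForIP(lookup func() (string, error), attempts int) (string, error) {
		wait := 200 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if ip, err := lookup(); err == nil {
				return ip, nil
			}
			jitter := time.Duration(rand.Int63n(int64(wait) / 2))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", wait+jitter)
			time.Sleep(wait + jitter)
			wait = wait * 3 / 2 // grow ~1.5x per attempt (assumed factor)
		}
		return "", errors.New("machine never reported an IP")
	}

	func main() {
		calls := 0
		ip, err := waitForIP(func() (string, error) {
			calls++
			if calls < 4 {
				return "", errors.New("no DHCP lease yet")
			}
			return "192.168.72.37", nil
		}, 10)
		fmt.Println(ip, err)
	}
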
	I1127 11:49:51.507340  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:49:53.508798  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:49:50.978166  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:49:53.478906  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:49:53.755100  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:49:56.257119  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:49:53.574765  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:49:53.575417  176850 main.go:141] libmachine: (newest-cni-693564) DBG | unable to find current IP address of domain newest-cni-693564 in network mk-newest-cni-693564
	I1127 11:49:53.575457  176850 main.go:141] libmachine: (newest-cni-693564) DBG | I1127 11:49:53.575357  176885 retry.go:31] will retry after 1.688789324s: waiting for machine to come up
	I1127 11:49:55.265946  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:49:55.266502  176850 main.go:141] libmachine: (newest-cni-693564) DBG | unable to find current IP address of domain newest-cni-693564 in network mk-newest-cni-693564
	I1127 11:49:55.266534  176850 main.go:141] libmachine: (newest-cni-693564) DBG | I1127 11:49:55.266450  176885 retry.go:31] will retry after 1.587137988s: waiting for machine to come up
	I1127 11:49:56.856224  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:49:56.856791  176850 main.go:141] libmachine: (newest-cni-693564) DBG | unable to find current IP address of domain newest-cni-693564 in network mk-newest-cni-693564
	I1127 11:49:56.856818  176850 main.go:141] libmachine: (newest-cni-693564) DBG | I1127 11:49:56.856737  176885 retry.go:31] will retry after 2.797559181s: waiting for machine to come up
	I1127 11:49:56.009544  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:49:58.509495  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:49:55.479218  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:49:57.979484  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:49:59.980286  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:49:58.756735  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:01.255870  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:49:59.656365  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:49:59.656919  176850 main.go:141] libmachine: (newest-cni-693564) DBG | unable to find current IP address of domain newest-cni-693564 in network mk-newest-cni-693564
	I1127 11:49:59.656957  176850 main.go:141] libmachine: (newest-cni-693564) DBG | I1127 11:49:59.656852  176885 retry.go:31] will retry after 3.298233191s: waiting for machine to come up
	I1127 11:50:01.010940  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:03.507394  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:02.476883  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:04.481939  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:03.256421  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:05.755606  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:02.956881  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:02.957363  176850 main.go:141] libmachine: (newest-cni-693564) DBG | unable to find current IP address of domain newest-cni-693564 in network mk-newest-cni-693564
	I1127 11:50:02.957387  176850 main.go:141] libmachine: (newest-cni-693564) DBG | I1127 11:50:02.957304  176885 retry.go:31] will retry after 3.891074905s: waiting for machine to come up
	I1127 11:50:06.853270  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:06.853880  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has current primary IP address 192.168.72.37 and MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:06.853895  176850 main.go:141] libmachine: (newest-cni-693564) Found IP for machine: 192.168.72.37
	I1127 11:50:06.853905  176850 main.go:141] libmachine: (newest-cni-693564) Reserving static IP address...
	I1127 11:50:06.854320  176850 main.go:141] libmachine: (newest-cni-693564) DBG | found host DHCP lease matching {name: "newest-cni-693564", mac: "52:54:00:f5:73:ab", ip: "192.168.72.37"} in network mk-newest-cni-693564: {Iface:virbr4 ExpiryTime:2023-11-27 12:50:00 +0000 UTC Type:0 Mac:52:54:00:f5:73:ab Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:newest-cni-693564 Clientid:01:52:54:00:f5:73:ab}
	I1127 11:50:06.854361  176850 main.go:141] libmachine: (newest-cni-693564) DBG | skip adding static IP to network mk-newest-cni-693564 - found existing host DHCP lease matching {name: "newest-cni-693564", mac: "52:54:00:f5:73:ab", ip: "192.168.72.37"}
	I1127 11:50:06.854378  176850 main.go:141] libmachine: (newest-cni-693564) Reserved static IP address: 192.168.72.37
	I1127 11:50:06.854396  176850 main.go:141] libmachine: (newest-cni-693564) Waiting for SSH to be available...
	I1127 11:50:06.854412  176850 main.go:141] libmachine: (newest-cni-693564) DBG | Getting to WaitForSSH function...
	I1127 11:50:06.856713  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:06.857014  176850 main.go:141] libmachine: (newest-cni-693564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:73:ab", ip: ""} in network mk-newest-cni-693564: {Iface:virbr4 ExpiryTime:2023-11-27 12:50:00 +0000 UTC Type:0 Mac:52:54:00:f5:73:ab Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:newest-cni-693564 Clientid:01:52:54:00:f5:73:ab}
	I1127 11:50:06.857052  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined IP address 192.168.72.37 and MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:06.857204  176850 main.go:141] libmachine: (newest-cni-693564) DBG | Using SSH client type: external
	I1127 11:50:06.857224  176850 main.go:141] libmachine: (newest-cni-693564) DBG | Using SSH private key: /home/jenkins/minikube-integration/17644-122411/.minikube/machines/newest-cni-693564/id_rsa (-rw-------)
	I1127 11:50:06.857253  176850 main.go:141] libmachine: (newest-cni-693564) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.37 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17644-122411/.minikube/machines/newest-cni-693564/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1127 11:50:06.857268  176850 main.go:141] libmachine: (newest-cni-693564) DBG | About to run SSH command:
	I1127 11:50:06.857281  176850 main.go:141] libmachine: (newest-cni-693564) DBG | exit 0
	I1127 11:50:06.955076  176850 main.go:141] libmachine: (newest-cni-693564) DBG | SSH cmd err, output: <nil>: 
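
WaitForSSH above shells out to the system ssh client and runs `exit 0` against the guest until sshd answers; the `SSH cmd err, output: <nil>` line marks success. A compact sketch of that probe with os/exec, reusing a few of the logged ssh options; the one-second retry cadence is an assumption:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// sshReady runs `exit 0` over ssh, as the WaitForSSH log above does,
	// retrying until the guest's sshd answers or attempts run out.
	func sshReady(addr, keyPath string, attempts int) error {
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("ssh",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-o", "ConnectTimeout=10",
				"-i", keyPath, addr, "exit 0")
			if err := cmd.Run(); err == nil {
				return nil
			}
			time.Sleep(time.Second) // cadence here is an assumption
		}
		return fmt.Errorf("ssh to %s never became available", addr)
	}

	func main() {
		fmt.Println(sshReady("docker@192.168.72.37", "/dev/null", 1))
	}
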
	I1127 11:50:06.955489  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetConfigRaw
	I1127 11:50:06.956109  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetIP
	I1127 11:50:06.959249  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:06.959653  176850 main.go:141] libmachine: (newest-cni-693564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:73:ab", ip: ""} in network mk-newest-cni-693564: {Iface:virbr4 ExpiryTime:2023-11-27 12:50:00 +0000 UTC Type:0 Mac:52:54:00:f5:73:ab Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:newest-cni-693564 Clientid:01:52:54:00:f5:73:ab}
	I1127 11:50:06.959684  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined IP address 192.168.72.37 and MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:06.959891  176850 profile.go:148] Saving config to /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/newest-cni-693564/config.json ...
	I1127 11:50:06.960102  176850 machine.go:88] provisioning docker machine ...
	I1127 11:50:06.960123  176850 main.go:141] libmachine: (newest-cni-693564) Calling .DriverName
	I1127 11:50:06.960340  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetMachineName
	I1127 11:50:06.960488  176850 buildroot.go:166] provisioning hostname "newest-cni-693564"
	I1127 11:50:06.960506  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetMachineName
	I1127 11:50:06.960644  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHHostname
	I1127 11:50:06.962924  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:06.963332  176850 main.go:141] libmachine: (newest-cni-693564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:73:ab", ip: ""} in network mk-newest-cni-693564: {Iface:virbr4 ExpiryTime:2023-11-27 12:50:00 +0000 UTC Type:0 Mac:52:54:00:f5:73:ab Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:newest-cni-693564 Clientid:01:52:54:00:f5:73:ab}
	I1127 11:50:06.963380  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined IP address 192.168.72.37 and MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:06.963591  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHPort
	I1127 11:50:06.963795  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHKeyPath
	I1127 11:50:06.963944  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHKeyPath
	I1127 11:50:06.964111  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHUsername
	I1127 11:50:06.964272  176850 main.go:141] libmachine: Using SSH client type: native
	I1127 11:50:06.964625  176850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I1127 11:50:06.964643  176850 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-693564 && echo "newest-cni-693564" | sudo tee /etc/hostname
	I1127 11:50:07.102565  176850 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-693564
	
	I1127 11:50:07.102592  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHHostname
	I1127 11:50:07.105609  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:07.106033  176850 main.go:141] libmachine: (newest-cni-693564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:73:ab", ip: ""} in network mk-newest-cni-693564: {Iface:virbr4 ExpiryTime:2023-11-27 12:50:00 +0000 UTC Type:0 Mac:52:54:00:f5:73:ab Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:newest-cni-693564 Clientid:01:52:54:00:f5:73:ab}
	I1127 11:50:07.106055  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined IP address 192.168.72.37 and MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:07.106245  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHPort
	I1127 11:50:07.106481  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHKeyPath
	I1127 11:50:07.106658  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHKeyPath
	I1127 11:50:07.106793  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHUsername
	I1127 11:50:07.106997  176850 main.go:141] libmachine: Using SSH client type: native
	I1127 11:50:07.107450  176850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I1127 11:50:07.107481  176850 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-693564' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-693564/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-693564' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1127 11:50:07.239908  176850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1127 11:50:07.239949  176850 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17644-122411/.minikube CaCertPath:/home/jenkins/minikube-integration/17644-122411/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17644-122411/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17644-122411/.minikube}
	I1127 11:50:07.239977  176850 buildroot.go:174] setting up certificates
	I1127 11:50:07.239991  176850 provision.go:83] configureAuth start
	I1127 11:50:07.240005  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetMachineName
	I1127 11:50:07.240312  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetIP
	I1127 11:50:07.242905  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:07.243338  176850 main.go:141] libmachine: (newest-cni-693564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:73:ab", ip: ""} in network mk-newest-cni-693564: {Iface:virbr4 ExpiryTime:2023-11-27 12:50:00 +0000 UTC Type:0 Mac:52:54:00:f5:73:ab Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:newest-cni-693564 Clientid:01:52:54:00:f5:73:ab}
	I1127 11:50:07.243370  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined IP address 192.168.72.37 and MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:07.243515  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHHostname
	I1127 11:50:07.245659  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:07.245988  176850 main.go:141] libmachine: (newest-cni-693564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:73:ab", ip: ""} in network mk-newest-cni-693564: {Iface:virbr4 ExpiryTime:2023-11-27 12:50:00 +0000 UTC Type:0 Mac:52:54:00:f5:73:ab Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:newest-cni-693564 Clientid:01:52:54:00:f5:73:ab}
	I1127 11:50:07.246018  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined IP address 192.168.72.37 and MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:07.246182  176850 provision.go:138] copyHostCerts
	I1127 11:50:07.246255  176850 exec_runner.go:144] found /home/jenkins/minikube-integration/17644-122411/.minikube/ca.pem, removing ...
	I1127 11:50:07.246266  176850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17644-122411/.minikube/ca.pem
	I1127 11:50:07.246338  176850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-122411/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17644-122411/.minikube/ca.pem (1078 bytes)
	I1127 11:50:07.246443  176850 exec_runner.go:144] found /home/jenkins/minikube-integration/17644-122411/.minikube/cert.pem, removing ...
	I1127 11:50:07.246454  176850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17644-122411/.minikube/cert.pem
	I1127 11:50:07.246479  176850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-122411/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17644-122411/.minikube/cert.pem (1123 bytes)
	I1127 11:50:07.246540  176850 exec_runner.go:144] found /home/jenkins/minikube-integration/17644-122411/.minikube/key.pem, removing ...
	I1127 11:50:07.246550  176850 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17644-122411/.minikube/key.pem
	I1127 11:50:07.246572  176850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17644-122411/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17644-122411/.minikube/key.pem (1679 bytes)
	I1127 11:50:07.246628  176850 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17644-122411/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17644-122411/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17644-122411/.minikube/certs/ca-key.pem org=jenkins.newest-cni-693564 san=[192.168.72.37 192.168.72.37 localhost 127.0.0.1 minikube newest-cni-693564]
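
provision.go:112 above generates a server certificate whose SANs cover the VM IP, localhost, and the machine names from the san=[...] list. A self-contained Go illustration of issuing such a cert with crypto/x509; minikube signs with its ca.pem/ca-key.pem, while this sketch self-signs to stay short:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// A generic illustration of a server cert whose SANs match the
	// san=[...] list logged above; self-signed for brevity.
	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-693564"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("192.168.72.37"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "newest-cni-693564"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}
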
	I1127 11:50:05.508650  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:08.008673  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:10.008738  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:06.979054  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:08.981820  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:07.758078  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:10.255016  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:07.456792  176850 provision.go:172] copyRemoteCerts
	I1127 11:50:07.456850  176850 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1127 11:50:07.456889  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHHostname
	I1127 11:50:07.459582  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:07.459914  176850 main.go:141] libmachine: (newest-cni-693564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:73:ab", ip: ""} in network mk-newest-cni-693564: {Iface:virbr4 ExpiryTime:2023-11-27 12:50:00 +0000 UTC Type:0 Mac:52:54:00:f5:73:ab Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:newest-cni-693564 Clientid:01:52:54:00:f5:73:ab}
	I1127 11:50:07.459944  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined IP address 192.168.72.37 and MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:07.460086  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHPort
	I1127 11:50:07.460339  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHKeyPath
	I1127 11:50:07.460531  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHUsername
	I1127 11:50:07.460664  176850 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/newest-cni-693564/id_rsa Username:docker}
	I1127 11:50:07.552922  176850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-122411/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1127 11:50:07.576149  176850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-122411/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1127 11:50:07.598841  176850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-122411/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1127 11:50:07.621680  176850 provision.go:86] duration metric: configureAuth took 381.673091ms
	I1127 11:50:07.621709  176850 buildroot.go:189] setting minikube options for container-runtime
	I1127 11:50:07.621981  176850 config.go:182] Loaded profile config "newest-cni-693564": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1127 11:50:07.622016  176850 main.go:141] libmachine: (newest-cni-693564) Calling .DriverName
	I1127 11:50:07.622295  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHHostname
	I1127 11:50:07.624843  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:07.625248  176850 main.go:141] libmachine: (newest-cni-693564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:73:ab", ip: ""} in network mk-newest-cni-693564: {Iface:virbr4 ExpiryTime:2023-11-27 12:50:00 +0000 UTC Type:0 Mac:52:54:00:f5:73:ab Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:newest-cni-693564 Clientid:01:52:54:00:f5:73:ab}
	I1127 11:50:07.625274  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined IP address 192.168.72.37 and MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:07.625499  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHPort
	I1127 11:50:07.625657  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHKeyPath
	I1127 11:50:07.625822  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHKeyPath
	I1127 11:50:07.626013  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHUsername
	I1127 11:50:07.626182  176850 main.go:141] libmachine: Using SSH client type: native
	I1127 11:50:07.626473  176850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I1127 11:50:07.626484  176850 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1127 11:50:07.748707  176850 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1127 11:50:07.748749  176850 buildroot.go:70] root file system type: tmpfs
	I1127 11:50:07.748906  176850 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1127 11:50:07.748933  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHHostname
	I1127 11:50:07.751540  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:07.751943  176850 main.go:141] libmachine: (newest-cni-693564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:73:ab", ip: ""} in network mk-newest-cni-693564: {Iface:virbr4 ExpiryTime:2023-11-27 12:50:00 +0000 UTC Type:0 Mac:52:54:00:f5:73:ab Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:newest-cni-693564 Clientid:01:52:54:00:f5:73:ab}
	I1127 11:50:07.751987  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined IP address 192.168.72.37 and MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:07.752130  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHPort
	I1127 11:50:07.752378  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHKeyPath
	I1127 11:50:07.752584  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHKeyPath
	I1127 11:50:07.752755  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHUsername
	I1127 11:50:07.752951  176850 main.go:141] libmachine: Using SSH client type: native
	I1127 11:50:07.753421  176850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I1127 11:50:07.753499  176850 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1127 11:50:07.889565  176850 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1127 11:50:07.889597  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHHostname
	I1127 11:50:07.892568  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:07.892951  176850 main.go:141] libmachine: (newest-cni-693564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:73:ab", ip: ""} in network mk-newest-cni-693564: {Iface:virbr4 ExpiryTime:2023-11-27 12:50:00 +0000 UTC Type:0 Mac:52:54:00:f5:73:ab Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:newest-cni-693564 Clientid:01:52:54:00:f5:73:ab}
	I1127 11:50:07.892980  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined IP address 192.168.72.37 and MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:07.893167  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHPort
	I1127 11:50:07.893365  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHKeyPath
	I1127 11:50:07.893531  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHKeyPath
	I1127 11:50:07.893649  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHUsername
	I1127 11:50:07.893785  176850 main.go:141] libmachine: Using SSH client type: native
	I1127 11:50:07.894193  176850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I1127 11:50:07.894223  176850 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1127 11:50:08.841022  176850 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1127 11:50:08.841054  176850 machine.go:91] provisioned docker machine in 1.88093665s
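
The `sudo diff -u ... || { mv; daemon-reload; enable; restart; }` command above is an idempotent unit update: the rendered docker.service only replaces the installed one, and only triggers a reload and restart, when the content actually differs (here diff fails because no unit existed yet, so the new file is installed). A small Go sketch of the same replace-only-if-changed idea:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// replaceIfChanged mirrors the diff-or-replace idiom above: only swap
	// the unit file in (and report that a daemon-reload/restart is needed)
	// when the rendered content differs from what is installed.
	func replaceIfChanged(path string, rendered []byte) (changed bool, err error) {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, rendered) {
			return false, nil // identical, skip the service restart
		}
		if err := os.WriteFile(path, rendered, 0o644); err != nil {
			return false, err
		}
		return true, nil
	}

	func main() {
		tmp, _ := os.CreateTemp("", "docker.service")
		defer os.Remove(tmp.Name())
		changed, err := replaceIfChanged(tmp.Name(), []byte("[Unit]\nDescription=Docker\n"))
		fmt.Println(changed, err) // true <nil> on first write
	}
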
	I1127 11:50:08.841069  176850 start.go:300] post-start starting for "newest-cni-693564" (driver="kvm2")
	I1127 11:50:08.841083  176850 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1127 11:50:08.841111  176850 main.go:141] libmachine: (newest-cni-693564) Calling .DriverName
	I1127 11:50:08.841478  176850 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1127 11:50:08.841506  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHHostname
	I1127 11:50:08.844453  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:08.844832  176850 main.go:141] libmachine: (newest-cni-693564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:73:ab", ip: ""} in network mk-newest-cni-693564: {Iface:virbr4 ExpiryTime:2023-11-27 12:50:00 +0000 UTC Type:0 Mac:52:54:00:f5:73:ab Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:newest-cni-693564 Clientid:01:52:54:00:f5:73:ab}
	I1127 11:50:08.844863  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined IP address 192.168.72.37 and MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:08.844989  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHPort
	I1127 11:50:08.845210  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHKeyPath
	I1127 11:50:08.845410  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHUsername
	I1127 11:50:08.845620  176850 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/newest-cni-693564/id_rsa Username:docker}
	I1127 11:50:08.937493  176850 ssh_runner.go:195] Run: cat /etc/os-release
	I1127 11:50:08.942098  176850 info.go:137] Remote host: Buildroot 2021.02.12
	I1127 11:50:08.942130  176850 filesync.go:126] Scanning /home/jenkins/minikube-integration/17644-122411/.minikube/addons for local assets ...
	I1127 11:50:08.942198  176850 filesync.go:126] Scanning /home/jenkins/minikube-integration/17644-122411/.minikube/files for local assets ...
	I1127 11:50:08.942307  176850 filesync.go:149] local asset: /home/jenkins/minikube-integration/17644-122411/.minikube/files/etc/ssl/certs/1296532.pem -> 1296532.pem in /etc/ssl/certs
	I1127 11:50:08.942423  176850 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1127 11:50:08.951459  176850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-122411/.minikube/files/etc/ssl/certs/1296532.pem --> /etc/ssl/certs/1296532.pem (1708 bytes)
	I1127 11:50:08.974769  176850 start.go:303] post-start completed in 133.683642ms
	I1127 11:50:08.974793  176850 fix.go:56] fixHost completed within 21.540431636s
	I1127 11:50:08.974817  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHHostname
	I1127 11:50:08.977778  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:08.978205  176850 main.go:141] libmachine: (newest-cni-693564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:73:ab", ip: ""} in network mk-newest-cni-693564: {Iface:virbr4 ExpiryTime:2023-11-27 12:50:00 +0000 UTC Type:0 Mac:52:54:00:f5:73:ab Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:newest-cni-693564 Clientid:01:52:54:00:f5:73:ab}
	I1127 11:50:08.978271  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined IP address 192.168.72.37 and MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:08.978447  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHPort
	I1127 11:50:08.978659  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHKeyPath
	I1127 11:50:08.978928  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHKeyPath
	I1127 11:50:08.979124  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHUsername
	I1127 11:50:08.979394  176850 main.go:141] libmachine: Using SSH client type: native
	I1127 11:50:08.979700  176850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.72.37 22 <nil> <nil>}
	I1127 11:50:08.979712  176850 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1127 11:50:09.103911  176850 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701085809.050271336
	
	I1127 11:50:09.103936  176850 fix.go:206] guest clock: 1701085809.050271336
	I1127 11:50:09.103944  176850 fix.go:219] Guest: 2023-11-27 11:50:09.050271336 +0000 UTC Remote: 2023-11-27 11:50:08.974797137 +0000 UTC m=+21.702543636 (delta=75.474199ms)
	I1127 11:50:09.103962  176850 fix.go:190] guest clock delta is within tolerance: 75.474199ms
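
fix.go:190/219 above reads the guest clock over SSH (the garbled `date +%!s(MISSING).%!N(MISSING)` is evidently Go's fmt mangling of `date +%s.%N`), computes the delta against the host, and accepts it when it falls inside a tolerance. A small Go sketch of that comparison; only the 75.474199ms delta comes from the log, and the 2s tolerance is an assumption:

	package main

	import (
		"fmt"
		"time"
	)

	// clockDelta reports how far the guest clock is from the host's and
	// whether it falls inside tol.
	func clockDelta(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
		d := guest.Sub(host)
		if d < 0 {
			d = -d
		}
		return d, d <= tol
	}

	func main() {
		host := time.Now()
		guest := host.Add(75474199 * time.Nanosecond) // delta from the log above
		d, ok := clockDelta(guest, host, 2*time.Second)
		fmt.Printf("delta=%s within tolerance: %v\n", d, ok)
	}
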
	I1127 11:50:09.103968  176850 start.go:83] releasing machines lock for "newest-cni-693564", held for 21.669623858s
	I1127 11:50:09.103993  176850 main.go:141] libmachine: (newest-cni-693564) Calling .DriverName
	I1127 11:50:09.104305  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetIP
	I1127 11:50:09.107231  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:09.107567  176850 main.go:141] libmachine: (newest-cni-693564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:73:ab", ip: ""} in network mk-newest-cni-693564: {Iface:virbr4 ExpiryTime:2023-11-27 12:50:00 +0000 UTC Type:0 Mac:52:54:00:f5:73:ab Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:newest-cni-693564 Clientid:01:52:54:00:f5:73:ab}
	I1127 11:50:09.107604  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined IP address 192.168.72.37 and MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:09.107800  176850 main.go:141] libmachine: (newest-cni-693564) Calling .DriverName
	I1127 11:50:09.108454  176850 main.go:141] libmachine: (newest-cni-693564) Calling .DriverName
	I1127 11:50:09.108642  176850 main.go:141] libmachine: (newest-cni-693564) Calling .DriverName
	I1127 11:50:09.108708  176850 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1127 11:50:09.108752  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHHostname
	I1127 11:50:09.108923  176850 ssh_runner.go:195] Run: cat /version.json
	I1127 11:50:09.108950  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHHostname
	I1127 11:50:09.111517  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:09.111834  176850 main.go:141] libmachine: (newest-cni-693564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:73:ab", ip: ""} in network mk-newest-cni-693564: {Iface:virbr4 ExpiryTime:2023-11-27 12:50:00 +0000 UTC Type:0 Mac:52:54:00:f5:73:ab Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:newest-cni-693564 Clientid:01:52:54:00:f5:73:ab}
	I1127 11:50:09.111864  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined IP address 192.168.72.37 and MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:09.111912  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:09.112037  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHPort
	I1127 11:50:09.112192  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHKeyPath
	I1127 11:50:09.112358  176850 main.go:141] libmachine: (newest-cni-693564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:73:ab", ip: ""} in network mk-newest-cni-693564: {Iface:virbr4 ExpiryTime:2023-11-27 12:50:00 +0000 UTC Type:0 Mac:52:54:00:f5:73:ab Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:newest-cni-693564 Clientid:01:52:54:00:f5:73:ab}
	I1127 11:50:09.112369  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHUsername
	I1127 11:50:09.112401  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined IP address 192.168.72.37 and MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:09.112517  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHPort
	I1127 11:50:09.112588  176850 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/newest-cni-693564/id_rsa Username:docker}
	I1127 11:50:09.112657  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHKeyPath
	I1127 11:50:09.112828  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHUsername
	I1127 11:50:09.113013  176850 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/newest-cni-693564/id_rsa Username:docker}
	I1127 11:50:09.226508  176850 ssh_runner.go:195] Run: systemctl --version
	I1127 11:50:09.232450  176850 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1127 11:50:09.238306  176850 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1127 11:50:09.238370  176850 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1127 11:50:09.253133  176850 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1127 11:50:09.253188  176850 start.go:472] detecting cgroup driver to use...
	I1127 11:50:09.253364  176850 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 11:50:09.272657  176850 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1127 11:50:09.282582  176850 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1127 11:50:09.291927  176850 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1127 11:50:09.291991  176850 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1127 11:50:09.301794  176850 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1127 11:50:09.311148  176850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1127 11:50:09.320417  176850 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1127 11:50:09.330331  176850 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1127 11:50:09.341594  176850 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1127 11:50:09.352147  176850 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1127 11:50:09.360826  176850 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1127 11:50:09.369636  176850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 11:50:09.486027  176850 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1127 11:50:09.503409  176850 start.go:472] detecting cgroup driver to use...
	I1127 11:50:09.503496  176850 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1127 11:50:09.519920  176850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1127 11:50:09.534875  176850 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1127 11:50:09.553649  176850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1127 11:50:09.565683  176850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1127 11:50:09.577729  176850 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1127 11:50:09.606466  176850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1127 11:50:09.619111  176850 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1127 11:50:09.637512  176850 ssh_runner.go:195] Run: which cri-dockerd
	I1127 11:50:09.641171  176850 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1127 11:50:09.649566  176850 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1127 11:50:09.666735  176850 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1127 11:50:09.784380  176850 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1127 11:50:09.910143  176850 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1127 11:50:09.910376  176850 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1127 11:50:09.927151  176850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 11:50:10.046082  176850 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1127 11:50:11.552036  176850 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.50591477s)
	I1127 11:50:11.552096  176850 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1127 11:50:11.665365  176850 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1127 11:50:11.785305  176850 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1127 11:50:11.912230  176850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 11:50:12.032477  176850 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1127 11:50:12.049231  176850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1127 11:50:12.165306  176850 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1127 11:50:12.251186  176850 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1127 11:50:12.251291  176850 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1127 11:50:12.258811  176850 start.go:540] Will wait 60s for crictl version
	I1127 11:50:12.258866  176850 ssh_runner.go:195] Run: which crictl
	I1127 11:50:12.263041  176850 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1127 11:50:12.331691  176850 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1127 11:50:12.331777  176850 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1127 11:50:12.358753  176850 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1127 11:50:12.391931  176850 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1127 11:50:12.391972  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetIP
	I1127 11:50:12.395075  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:12.395486  176850 main.go:141] libmachine: (newest-cni-693564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:73:ab", ip: ""} in network mk-newest-cni-693564: {Iface:virbr4 ExpiryTime:2023-11-27 12:50:00 +0000 UTC Type:0 Mac:52:54:00:f5:73:ab Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:newest-cni-693564 Clientid:01:52:54:00:f5:73:ab}
	I1127 11:50:12.395529  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined IP address 192.168.72.37 and MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:12.395712  176850 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1127 11:50:12.399790  176850 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
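The grep/echo/cp pipeline above is an idempotent /etc/hosts update: it drops any stale host.minikube.internal entry, appends the current mapping, and copies the temp file back over /etc/hosts in one step. The expected resulting entry:

    grep host.minikube.internal /etc/hosts
    # 192.168.72.1	host.minikube.internal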
	I1127 11:50:12.413386  176850 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1127 11:50:12.508817  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:15.007797  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:11.476968  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:13.978804  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:12.256084  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:14.755813  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:12.414787  176850 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1127 11:50:12.414860  176850 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1127 11:50:12.437460  176850 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1127 11:50:12.437492  176850 docker.go:601] Images already preloaded, skipping extraction
	I1127 11:50:12.437545  176850 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1127 11:50:12.459906  176850 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1127 11:50:12.459940  176850 cache_images.go:84] Images are preloaded, skipping loading
	I1127 11:50:12.460001  176850 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1127 11:50:12.489570  176850 cni.go:84] Creating CNI manager for ""
	I1127 11:50:12.489598  176850 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1127 11:50:12.489623  176850 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1127 11:50:12.489647  176850 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.37 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-693564 NodeName:newest-cni-693564 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.37"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.37 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1127 11:50:12.489838  176850 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.37
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-693564"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.37
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.37"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
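That closes the generated config: four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) which are written to /var/tmp/minikube/kubeadm.yaml.new further down and then fed to the `kubeadm init phase ...` commands. A hedged way to validate such a file without touching node state (standard kubeadm flag; the path matches the log):

    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run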
	
	I1127 11:50:12.489977  176850 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-693564 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.37
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:newest-cni-693564 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
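The [Unit]/[Service] fragment above is the systemd drop-in (the 10-kubeadm.conf scp'd just below) that overrides kubelet's ExecStart to point at the minikube-shipped binary and the cri-dockerd socket; the empty ExecStart= line first clears any ExecStart inherited from the base unit, a standard systemd drop-in idiom. After the daemon-reload the merged unit can be inspected with:

    sudo systemctl daemon-reload
    sudo systemctl cat kubelet   # shows kubelet.service plus the 10-kubeadm.conf override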
	I1127 11:50:12.490056  176850 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1127 11:50:12.500924  176850 binaries.go:44] Found k8s binaries, skipping transfer
	I1127 11:50:12.500990  176850 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1127 11:50:12.511029  176850 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (416 bytes)
	I1127 11:50:12.527769  176850 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1127 11:50:12.543871  176850 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1127 11:50:12.559999  176850 ssh_runner.go:195] Run: grep 192.168.72.37	control-plane.minikube.internal$ /etc/hosts
	I1127 11:50:12.563489  176850 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.37	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1127 11:50:12.574112  176850 certs.go:56] Setting up /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/newest-cni-693564 for IP: 192.168.72.37
	I1127 11:50:12.574149  176850 certs.go:190] acquiring lock for shared ca certs: {Name:mk258fc69412fb04a19c4d4246c928ef97503aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:50:12.574328  176850 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17644-122411/.minikube/ca.key
	I1127 11:50:12.574394  176850 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17644-122411/.minikube/proxy-client-ca.key
	I1127 11:50:12.574519  176850 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/newest-cni-693564/client.key
	I1127 11:50:12.574616  176850 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/newest-cni-693564/apiserver.key.594d9f9f
	I1127 11:50:12.574665  176850 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/newest-cni-693564/proxy-client.key
	I1127 11:50:12.574808  176850 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-122411/.minikube/certs/home/jenkins/minikube-integration/17644-122411/.minikube/certs/129653.pem (1338 bytes)
	W1127 11:50:12.574847  176850 certs.go:433] ignoring /home/jenkins/minikube-integration/17644-122411/.minikube/certs/home/jenkins/minikube-integration/17644-122411/.minikube/certs/129653_empty.pem, impossibly tiny 0 bytes
	I1127 11:50:12.574862  176850 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-122411/.minikube/certs/home/jenkins/minikube-integration/17644-122411/.minikube/certs/ca-key.pem (1675 bytes)
	I1127 11:50:12.574897  176850 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-122411/.minikube/certs/home/jenkins/minikube-integration/17644-122411/.minikube/certs/ca.pem (1078 bytes)
	I1127 11:50:12.574937  176850 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-122411/.minikube/certs/home/jenkins/minikube-integration/17644-122411/.minikube/certs/cert.pem (1123 bytes)
	I1127 11:50:12.574973  176850 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-122411/.minikube/certs/home/jenkins/minikube-integration/17644-122411/.minikube/certs/key.pem (1679 bytes)
	I1127 11:50:12.575030  176850 certs.go:437] found cert: /home/jenkins/minikube-integration/17644-122411/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17644-122411/.minikube/files/etc/ssl/certs/1296532.pem (1708 bytes)
	I1127 11:50:12.576214  176850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/newest-cni-693564/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1127 11:50:12.599784  176850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/newest-cni-693564/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1127 11:50:12.622697  176850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/newest-cni-693564/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1127 11:50:12.645973  176850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/newest-cni-693564/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1127 11:50:12.669139  176850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-122411/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1127 11:50:12.690803  176850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-122411/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1127 11:50:12.713028  176850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-122411/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1127 11:50:12.736199  176850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-122411/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1127 11:50:12.761197  176850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-122411/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1127 11:50:12.783101  176850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-122411/.minikube/certs/129653.pem --> /usr/share/ca-certificates/129653.pem (1338 bytes)
	I1127 11:50:12.806109  176850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17644-122411/.minikube/files/etc/ssl/certs/1296532.pem --> /usr/share/ca-certificates/1296532.pem (1708 bytes)
	I1127 11:50:12.828228  176850 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1127 11:50:12.845288  176850 ssh_runner.go:195] Run: openssl version
	I1127 11:50:12.851048  176850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129653.pem && ln -fs /usr/share/ca-certificates/129653.pem /etc/ssl/certs/129653.pem"
	I1127 11:50:12.862107  176850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129653.pem
	I1127 11:50:12.867075  176850 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 27 11:01 /usr/share/ca-certificates/129653.pem
	I1127 11:50:12.867135  176850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129653.pem
	I1127 11:50:12.872759  176850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/129653.pem /etc/ssl/certs/51391683.0"
	I1127 11:50:12.883692  176850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1296532.pem && ln -fs /usr/share/ca-certificates/1296532.pem /etc/ssl/certs/1296532.pem"
	I1127 11:50:12.894210  176850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1296532.pem
	I1127 11:50:12.898856  176850 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 27 11:01 /usr/share/ca-certificates/1296532.pem
	I1127 11:50:12.898900  176850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1296532.pem
	I1127 11:50:12.904178  176850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1296532.pem /etc/ssl/certs/3ec20f2e.0"
	I1127 11:50:12.916222  176850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1127 11:50:12.927094  176850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1127 11:50:12.932320  176850 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 27 10:56 /usr/share/ca-certificates/minikubeCA.pem
	I1127 11:50:12.932375  176850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1127 11:50:12.937870  176850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
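The hash-and-symlink sequence above is how OpenSSL's CA lookup works: each trusted certificate must be reachable under /etc/ssl/certs as <subject-hash>.0, where the hash comes from `openssl x509 -hash`. That is why the link names in the log (51391683.0, 3ec20f2e.0, b5213941.0) look opaque; they are derived, e.g.:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$h"   # b5213941 for this CA, per the log
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"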
	I1127 11:50:12.948028  176850 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1127 11:50:12.952669  176850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1127 11:50:12.958314  176850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1127 11:50:12.964046  176850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1127 11:50:12.969762  176850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1127 11:50:12.975914  176850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1127 11:50:12.981874  176850 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
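Each `-checkend 86400` call above is a pure exit-status test: openssl exits 0 if the certificate will still be valid 86400 seconds (24 h) from now and 1 if it will have expired, so the caller can decide between reusing and regenerating certs without parsing dates. Sketch:

    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
      echo "cert valid for at least another 24h - reuse"
    else
      echo "cert expires within 24h - regenerate"
    fi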
	I1127 11:50:12.987283  176850 kubeadm.go:404] StartCluster: {Name:newest-cni-693564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:newest-cni-693564 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.37 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:50:12.987399  176850 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1127 11:50:13.007048  176850 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1127 11:50:13.018746  176850 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1127 11:50:13.018764  176850 kubeadm.go:636] restartCluster start
	I1127 11:50:13.018814  176850 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1127 11:50:13.028145  176850 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1127 11:50:13.028928  176850 kubeconfig.go:135] verify returned: extract IP: "newest-cni-693564" does not appear in /home/jenkins/minikube-integration/17644-122411/kubeconfig
	I1127 11:50:13.029328  176850 kubeconfig.go:146] "newest-cni-693564" context is missing from /home/jenkins/minikube-integration/17644-122411/kubeconfig - will repair!
	I1127 11:50:13.030041  176850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-122411/kubeconfig: {Name:mk165b6db416838b8311934f21a494f4c2865dd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:50:13.031606  176850 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1127 11:50:13.040624  176850 api_server.go:166] Checking apiserver status ...
	I1127 11:50:13.040662  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1127 11:50:13.052583  176850 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1127 11:50:13.052599  176850 api_server.go:166] Checking apiserver status ...
	I1127 11:50:13.052640  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1127 11:50:13.063873  176850 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1127 11:50:13.564308  176850 api_server.go:166] Checking apiserver status ...
	I1127 11:50:13.564433  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1127 11:50:13.577769  176850 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1127 11:50:14.064320  176850 api_server.go:166] Checking apiserver status ...
	I1127 11:50:14.064435  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1127 11:50:14.078761  176850 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1127 11:50:14.564038  176850 api_server.go:166] Checking apiserver status ...
	I1127 11:50:14.564116  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1127 11:50:14.578006  176850 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1127 11:50:15.064325  176850 api_server.go:166] Checking apiserver status ...
	I1127 11:50:15.064426  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1127 11:50:15.076831  176850 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1127 11:50:15.564152  176850 api_server.go:166] Checking apiserver status ...
	I1127 11:50:15.564235  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1127 11:50:15.579061  176850 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1127 11:50:16.064469  176850 api_server.go:166] Checking apiserver status ...
	I1127 11:50:16.064567  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1127 11:50:16.078148  176850 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1127 11:50:16.564338  176850 api_server.go:166] Checking apiserver status ...
	I1127 11:50:16.564417  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1127 11:50:16.578271  176850 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1127 11:50:17.064875  176850 api_server.go:166] Checking apiserver status ...
	I1127 11:50:17.064963  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1127 11:50:17.078244  176850 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
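Each "Checking apiserver status" attempt above and below is the same probe: pgrep's -f matches against the full command line, -x requires the pattern to match it exactly, and -n returns only the newest matching PID, so exit status 1 simply means no kube-apiserver process exists yet and the caller retries on a ~500 ms cadence. Reproduced by hand:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    echo $?   # 1 while the apiserver is down, 0 (and a PID on stdout) once it is up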
	I1127 11:50:17.508558  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:20.008236  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:16.478146  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:18.479639  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:17.255520  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:19.754638  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:17.564392  176850 api_server.go:166] Checking apiserver status ...
	I1127 11:50:17.564490  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1127 11:50:17.577078  176850 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1127 11:50:18.064710  176850 api_server.go:166] Checking apiserver status ...
	I1127 11:50:18.064800  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1127 11:50:18.076875  176850 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1127 11:50:18.564392  176850 api_server.go:166] Checking apiserver status ...
	I1127 11:50:18.564462  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1127 11:50:18.576717  176850 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1127 11:50:19.064245  176850 api_server.go:166] Checking apiserver status ...
	I1127 11:50:19.064359  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1127 11:50:19.076675  176850 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1127 11:50:19.564226  176850 api_server.go:166] Checking apiserver status ...
	I1127 11:50:19.564313  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1127 11:50:19.576811  176850 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1127 11:50:20.065005  176850 api_server.go:166] Checking apiserver status ...
	I1127 11:50:20.065081  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1127 11:50:20.077781  176850 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1127 11:50:20.564198  176850 api_server.go:166] Checking apiserver status ...
	I1127 11:50:20.564299  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1127 11:50:20.577053  176850 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1127 11:50:21.064676  176850 api_server.go:166] Checking apiserver status ...
	I1127 11:50:21.064768  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1127 11:50:21.078202  176850 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1127 11:50:21.564793  176850 api_server.go:166] Checking apiserver status ...
	I1127 11:50:21.564870  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1127 11:50:21.578697  176850 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1127 11:50:22.064259  176850 api_server.go:166] Checking apiserver status ...
	I1127 11:50:22.064363  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1127 11:50:22.078095  176850 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1127 11:50:22.511274  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:25.007922  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:20.977262  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:22.977635  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:21.754973  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:23.755075  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:25.756130  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:22.564305  176850 api_server.go:166] Checking apiserver status ...
	I1127 11:50:22.564384  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1127 11:50:22.576405  176850 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1127 11:50:23.041225  176850 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1127 11:50:23.041271  176850 kubeadm.go:1128] stopping kube-system containers ...
	I1127 11:50:23.041341  176850 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1127 11:50:23.064015  176850 docker.go:469] Stopping containers: [515b32ce72d1 0fbe8a50d062 bb82b42031b5 826c1004b3c7 36263ec40b3c 13216adc18b8 697fa9a55373 dae56f3968ec c2df64f40fcc 7ebd29e54d90 47a5102a81fd 86d6fc9f2408 2e2ca0eaaa7a 10af6ccefabe e65ce5208f2c]
	I1127 11:50:23.064109  176850 ssh_runner.go:195] Run: docker stop 515b32ce72d1 0fbe8a50d062 bb82b42031b5 826c1004b3c7 36263ec40b3c 13216adc18b8 697fa9a55373 dae56f3968ec c2df64f40fcc 7ebd29e54d90 47a5102a81fd 86d6fc9f2408 2e2ca0eaaa7a 10af6ccefabe e65ce5208f2c
	I1127 11:50:23.086533  176850 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1127 11:50:23.101944  176850 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1127 11:50:23.110678  176850 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1127 11:50:23.110725  176850 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1127 11:50:23.118868  176850 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1127 11:50:23.118891  176850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1127 11:50:23.255672  176850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1127 11:50:24.105762  176850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1127 11:50:24.298371  176850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1127 11:50:24.399579  176850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1127 11:50:24.494788  176850 api_server.go:52] waiting for apiserver process to appear ...
	I1127 11:50:24.494869  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 11:50:24.513269  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 11:50:25.029482  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 11:50:25.528989  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 11:50:26.029773  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 11:50:26.529468  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 11:50:26.559262  176850 api_server.go:72] duration metric: took 2.06447148s to wait for apiserver process to appear ...
	I1127 11:50:26.559291  176850 api_server.go:88] waiting for apiserver healthz status ...
	I1127 11:50:26.559308  176850 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I1127 11:50:26.559839  176850 api_server.go:269] stopped: https://192.168.72.37:8443/healthz: Get "https://192.168.72.37:8443/healthz": dial tcp 192.168.72.37:8443: connect: connection refused
	I1127 11:50:26.559896  176850 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I1127 11:50:26.560329  176850 api_server.go:269] stopped: https://192.168.72.37:8443/healthz: Get "https://192.168.72.37:8443/healthz": dial tcp 192.168.72.37:8443: connect: connection refused
	I1127 11:50:27.060613  176850 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I1127 11:50:27.009490  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:29.009795  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:25.476712  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:27.979247  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:29.979769  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:27.757784  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:30.257365  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:30.961152  176850 api_server.go:279] https://192.168.72.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1127 11:50:30.961198  176850 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1127 11:50:30.961217  176850 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I1127 11:50:31.019034  176850 api_server.go:279] https://192.168.72.37:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1127 11:50:31.019063  176850 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1127 11:50:31.061308  176850 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I1127 11:50:31.111424  176850 api_server.go:279] https://192.168.72.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1127 11:50:31.111462  176850 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1127 11:50:31.560665  176850 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I1127 11:50:31.566939  176850 api_server.go:279] https://192.168.72.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1127 11:50:31.566972  176850 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1127 11:50:32.061230  176850 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I1127 11:50:32.077914  176850 api_server.go:279] https://192.168.72.37:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1127 11:50:32.077953  176850 api_server.go:103] status: https://192.168.72.37:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1127 11:50:32.561230  176850 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I1127 11:50:32.567266  176850 api_server.go:279] https://192.168.72.37:8443/healthz returned 200:
	ok
	I1127 11:50:32.577394  176850 api_server.go:141] control plane version: v1.28.4
	I1127 11:50:32.577423  176850 api_server.go:131] duration metric: took 6.018123688s to wait for apiserver health ...
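The progression above — connection refused, then 403 for system:anonymous, then 500s whose [-] entries shrink as post-start hooks finish, then 200 — is the normal apiserver boot sequence; unauthenticated access to /healthz only works once the rbac/bootstrap-roles hook has installed the default roles that permit it. The same endpoint can be probed by hand, and the ?verbose query prints the per-check table even when the aggregate result is 200 (sketch):

    curl -k 'https://192.168.72.37:8443/healthz?verbose'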
	I1127 11:50:32.577434  176850 cni.go:84] Creating CNI manager for ""
	I1127 11:50:32.577457  176850 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1127 11:50:32.579316  176850 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1127 11:50:32.580823  176850 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1127 11:50:32.591332  176850 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
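The 457-byte 1-k8s.conflist pushed here configures the bridge CNI chosen at cni.go:158. The literal payload is not logged; a representative bridge + portmap conflist of the kind minikube installs (all field values are assumptions, apart from the pod subnet, which matches this run's 10.42.0.0/16) looks like:

    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "ranges": [[{ "subnet": "10.42.0.0/16" }]] } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF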
	I1127 11:50:32.640136  176850 system_pods.go:43] waiting for kube-system pods to appear ...
	I1127 11:50:32.653068  176850 system_pods.go:59] 8 kube-system pods found
	I1127 11:50:32.653109  176850 system_pods.go:61] "coredns-5dd5756b68-vgmhm" [9679fae0-f5ec-47fe-a76b-20ac17e4b7bb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1127 11:50:32.653119  176850 system_pods.go:61] "etcd-newest-cni-693564" [e30d4691-4abb-46f0-8c8c-4fce6d35222b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1127 11:50:32.653132  176850 system_pods.go:61] "kube-apiserver-newest-cni-693564" [65a66c17-f651-4e0f-9937-d1ca71d844f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1127 11:50:32.653142  176850 system_pods.go:61] "kube-controller-manager-newest-cni-693564" [f2e62fd9-7edb-402b-bc2e-7739e98b76d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1127 11:50:32.653152  176850 system_pods.go:61] "kube-proxy-tm46c" [e7a29bc5-a3b1-4a26-a63e-d7e2e451eb5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1127 11:50:32.653166  176850 system_pods.go:61] "kube-scheduler-newest-cni-693564" [951cb653-d84e-4b25-9d95-9becdedb8ec2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1127 11:50:32.653177  176850 system_pods.go:61] "metrics-server-57f55c9bc5-xtx5k" [12db0a88-bc71-4faf-8867-6b1cc1f67b8a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:50:32.653187  176850 system_pods.go:61] "storage-provisioner" [c5ab8540-d054-4636-89aa-45b5cdd5a4e5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1127 11:50:32.653200  176850 system_pods.go:74] duration metric: took 13.035492ms to wait for pod list to return data ...
	I1127 11:50:32.653215  176850 node_conditions.go:102] verifying NodePressure condition ...
	I1127 11:50:32.656637  176850 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1127 11:50:32.656671  176850 node_conditions.go:123] node cpu capacity is 2
	I1127 11:50:32.656684  176850 node_conditions.go:105] duration metric: took 3.463227ms to run NodePressure ...
	I1127 11:50:32.656778  176850 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1127 11:50:33.202157  176850 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1127 11:50:33.220294  176850 ops.go:34] apiserver oom_adj: -16
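The -16 above is read from /proc/<pid>/oom_adj, the legacy OOM-killer knob (range -17..15). The kernel keeps it in sync with the newer oom_score_adj (range -1000..1000) at a ratio of roughly 1000/17, so -16 here is consistent with the -997 that the kubelet typically assigns to guaranteed control-plane pods, i.e. the apiserver is nearly exempt from OOM killing. Both views can be read directly (sketch; the -997 is an assumption about kubelet's default):

    pid=$(pgrep kube-apiserver)
    cat "/proc/$pid/oom_adj" "/proc/$pid/oom_score_adj"
    # -16    (legacy scale, as logged above)
    # -997   (assumed value on the oom_score_adj scale)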
	I1127 11:50:33.220323  176850 kubeadm.go:640] restartCluster took 20.201550369s
	I1127 11:50:33.220333  176850 kubeadm.go:406] StartCluster complete in 20.233059566s
	I1127 11:50:33.220355  176850 settings.go:142] acquiring lock: {Name:mk0bde143fb6a5b453a36dab2e4269e4e489beea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:50:33.220451  176850 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17644-122411/kubeconfig
	I1127 11:50:33.222329  176850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-122411/kubeconfig: {Name:mk165b6db416838b8311934f21a494f4c2865dd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:50:33.222613  176850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1127 11:50:33.222730  176850 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1127 11:50:33.222810  176850 addons.go:69] Setting dashboard=true in profile "newest-cni-693564"
	I1127 11:50:33.222822  176850 addons.go:69] Setting default-storageclass=true in profile "newest-cni-693564"
	I1127 11:50:33.222832  176850 addons.go:231] Setting addon dashboard=true in "newest-cni-693564"
	I1127 11:50:33.222837  176850 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-693564"
	I1127 11:50:33.222870  176850 addons.go:69] Setting metrics-server=true in profile "newest-cni-693564"
	I1127 11:50:33.222871  176850 config.go:182] Loaded profile config "newest-cni-693564": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1127 11:50:33.222893  176850 addons.go:231] Setting addon metrics-server=true in "newest-cni-693564"
	W1127 11:50:33.222901  176850 addons.go:240] addon metrics-server should already be in state true
	I1127 11:50:33.222927  176850 cache.go:107] acquiring lock: {Name:mk395a86368ef8d463afdafe89a54fa575ce50bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:50:33.222949  176850 host.go:66] Checking if "newest-cni-693564" exists ...
	I1127 11:50:33.222987  176850 cache.go:115] /home/jenkins/minikube-integration/17644-122411/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I1127 11:50:33.222995  176850 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17644-122411/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 73.681µs
	I1127 11:50:33.223005  176850 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17644-122411/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I1127 11:50:33.223012  176850 cache.go:87] Successfully saved all images to host disk.
	I1127 11:50:33.223196  176850 config.go:182] Loaded profile config "newest-cni-693564": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1127 11:50:33.223279  176850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:50:33.223309  176850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:50:33.223353  176850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	W1127 11:50:33.222849  176850 addons.go:240] addon dashboard should already be in state true
	I1127 11:50:33.223377  176850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:50:33.222809  176850 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-693564"
	I1127 11:50:33.223451  176850 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-693564"
	W1127 11:50:33.223461  176850 addons.go:240] addon storage-provisioner should already be in state true
	I1127 11:50:33.223491  176850 host.go:66] Checking if "newest-cni-693564" exists ...
	I1127 11:50:33.223524  176850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:50:33.223544  176850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:50:33.223750  176850 host.go:66] Checking if "newest-cni-693564" exists ...
	I1127 11:50:33.223893  176850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:50:33.223929  176850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:50:33.224124  176850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:50:33.224145  176850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:50:33.247327  176850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35707
	I1127 11:50:33.247388  176850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45237
	I1127 11:50:33.247335  176850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40699
	I1127 11:50:33.247559  176850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45983
	I1127 11:50:33.247622  176850 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-693564" context rescaled to 1 replicas
	I1127 11:50:33.247658  176850 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.37 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1127 11:50:33.249785  176850 out.go:177] * Verifying Kubernetes components...
	I1127 11:50:33.247998  176850 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:50:33.248017  176850 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:50:33.248048  176850 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:50:33.248263  176850 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:50:33.251428  176850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:50:33.252027  176850 main.go:141] libmachine: Using API Version  1
	I1127 11:50:33.252052  176850 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:50:33.252148  176850 main.go:141] libmachine: Using API Version  1
	I1127 11:50:33.252164  176850 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:50:33.252399  176850 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:50:33.252597  176850 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:50:33.253039  176850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:50:33.253065  176850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:50:33.253201  176850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:50:33.253224  176850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:50:33.253336  176850 main.go:141] libmachine: Using API Version  1
	I1127 11:50:33.253352  176850 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:50:33.253516  176850 main.go:141] libmachine: Using API Version  1
	I1127 11:50:33.253538  176850 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:50:33.253889  176850 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:50:33.253977  176850 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:50:33.254154  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetState
	I1127 11:50:33.254501  176850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:50:33.254529  176850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:50:33.259280  176850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42107
	I1127 11:50:33.260357  176850 addons.go:231] Setting addon default-storageclass=true in "newest-cni-693564"
	W1127 11:50:33.260369  176850 addons.go:240] addon default-storageclass should already be in state true
	I1127 11:50:33.260391  176850 host.go:66] Checking if "newest-cni-693564" exists ...
	I1127 11:50:33.260688  176850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:50:33.260715  176850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:50:33.261227  176850 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:50:33.261654  176850 main.go:141] libmachine: Using API Version  1
	I1127 11:50:33.261680  176850 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:50:33.261953  176850 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:50:33.262073  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetState
	I1127 11:50:33.264217  176850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:50:33.264247  176850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:50:33.276273  176850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39567
	I1127 11:50:33.276474  176850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38733
	I1127 11:50:33.276961  176850 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:50:33.277078  176850 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:50:33.277647  176850 main.go:141] libmachine: Using API Version  1
	I1127 11:50:33.277666  176850 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:50:33.277804  176850 main.go:141] libmachine: Using API Version  1
	I1127 11:50:33.277829  176850 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:50:33.277812  176850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46197
	I1127 11:50:33.278343  176850 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:50:33.278360  176850 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:50:33.278365  176850 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:50:33.278582  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetState
	I1127 11:50:33.278633  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetState
	I1127 11:50:33.278883  176850 main.go:141] libmachine: Using API Version  1
	I1127 11:50:33.278903  176850 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:50:33.279292  176850 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:50:33.280691  176850 main.go:141] libmachine: (newest-cni-693564) Calling .DriverName
	I1127 11:50:33.282534  176850 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 11:50:33.284154  176850 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 11:50:33.281674  176850 main.go:141] libmachine: (newest-cni-693564) Calling .DriverName
	I1127 11:50:33.282404  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetState
	I1127 11:50:33.284217  176850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1127 11:50:33.284236  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHHostname
	I1127 11:50:33.286049  176850 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1127 11:50:33.284883  176850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46341
	I1127 11:50:33.287081  176850 main.go:141] libmachine: (newest-cni-693564) Calling .DriverName
	I1127 11:50:33.287585  176850 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1127 11:50:33.287604  176850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1127 11:50:33.287623  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHHostname
	I1127 11:50:33.290389  176850 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1127 11:50:33.288430  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:33.289015  176850 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:50:33.289754  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHPort
	I1127 11:50:33.291044  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:33.293135  176850 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1127 11:50:33.291810  176850 main.go:141] libmachine: (newest-cni-693564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:73:ab", ip: ""} in network mk-newest-cni-693564: {Iface:virbr4 ExpiryTime:2023-11-27 12:50:00 +0000 UTC Type:0 Mac:52:54:00:f5:73:ab Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:newest-cni-693564 Clientid:01:52:54:00:f5:73:ab}
	I1127 11:50:33.293177  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined IP address 192.168.72.37 and MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:33.291697  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHPort
	I1127 11:50:33.291842  176850 main.go:141] libmachine: (newest-cni-693564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:73:ab", ip: ""} in network mk-newest-cni-693564: {Iface:virbr4 ExpiryTime:2023-11-27 12:50:00 +0000 UTC Type:0 Mac:52:54:00:f5:73:ab Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:newest-cni-693564 Clientid:01:52:54:00:f5:73:ab}
	I1127 11:50:33.291662  176850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40895
	I1127 11:50:33.293247  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined IP address 192.168.72.37 and MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:33.292030  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHKeyPath
	I1127 11:50:33.292301  176850 main.go:141] libmachine: Using API Version  1
	I1127 11:50:33.293377  176850 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:50:31.507832  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:33.510704  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:32.477213  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:34.481146  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:33.294709  176850 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1127 11:50:33.293433  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHKeyPath
	I1127 11:50:33.294729  176850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1127 11:50:33.294749  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHHostname
	I1127 11:50:33.293458  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHUsername
	I1127 11:50:33.293673  176850 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:50:33.294001  176850 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:50:33.295322  176850 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/newest-cni-693564/id_rsa Username:docker}
	I1127 11:50:33.295363  176850 main.go:141] libmachine: Using API Version  1
	I1127 11:50:33.295378  176850 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:50:33.295434  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHUsername
	I1127 11:50:33.295571  176850 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/newest-cni-693564/id_rsa Username:docker}
	I1127 11:50:33.296089  176850 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:50:33.296217  176850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:50:33.296249  176850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:50:33.296678  176850 main.go:141] libmachine: (newest-cni-693564) Calling .DriverName
	I1127 11:50:33.296901  176850 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1127 11:50:33.296924  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHHostname
	I1127 11:50:33.297219  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:33.297651  176850 main.go:141] libmachine: (newest-cni-693564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:73:ab", ip: ""} in network mk-newest-cni-693564: {Iface:virbr4 ExpiryTime:2023-11-27 12:50:00 +0000 UTC Type:0 Mac:52:54:00:f5:73:ab Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:newest-cni-693564 Clientid:01:52:54:00:f5:73:ab}
	I1127 11:50:33.297672  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined IP address 192.168.72.37 and MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:33.298001  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHPort
	I1127 11:50:33.298197  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHKeyPath
	I1127 11:50:33.298360  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHUsername
	I1127 11:50:33.298526  176850 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/newest-cni-693564/id_rsa Username:docker}
	I1127 11:50:33.299707  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:33.300049  176850 main.go:141] libmachine: (newest-cni-693564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:73:ab", ip: ""} in network mk-newest-cni-693564: {Iface:virbr4 ExpiryTime:2023-11-27 12:50:00 +0000 UTC Type:0 Mac:52:54:00:f5:73:ab Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:newest-cni-693564 Clientid:01:52:54:00:f5:73:ab}
	I1127 11:50:33.300378  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHPort
	I1127 11:50:33.300415  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined IP address 192.168.72.37 and MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:33.300572  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHKeyPath
	I1127 11:50:33.300731  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHUsername
	I1127 11:50:33.300979  176850 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/newest-cni-693564/id_rsa Username:docker}
	I1127 11:50:33.348452  176850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42407
	I1127 11:50:33.349320  176850 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:50:33.350282  176850 main.go:141] libmachine: Using API Version  1
	I1127 11:50:33.350303  176850 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:50:33.350782  176850 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:50:33.351026  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetState
	I1127 11:50:33.352853  176850 main.go:141] libmachine: (newest-cni-693564) Calling .DriverName
	I1127 11:50:33.353148  176850 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1127 11:50:33.353170  176850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1127 11:50:33.353191  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHHostname
	I1127 11:50:33.355808  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:33.356269  176850 main.go:141] libmachine: (newest-cni-693564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f5:73:ab", ip: ""} in network mk-newest-cni-693564: {Iface:virbr4 ExpiryTime:2023-11-27 12:50:00 +0000 UTC Type:0 Mac:52:54:00:f5:73:ab Iaid: IPaddr:192.168.72.37 Prefix:24 Hostname:newest-cni-693564 Clientid:01:52:54:00:f5:73:ab}
	I1127 11:50:33.356301  176850 main.go:141] libmachine: (newest-cni-693564) DBG | domain newest-cni-693564 has defined IP address 192.168.72.37 and MAC address 52:54:00:f5:73:ab in network mk-newest-cni-693564
	I1127 11:50:33.356487  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHPort
	I1127 11:50:33.356649  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHKeyPath
	I1127 11:50:33.356823  176850 main.go:141] libmachine: (newest-cni-693564) Calling .GetSSHUsername
	I1127 11:50:33.357188  176850 sshutil.go:53] new ssh client: &{IP:192.168.72.37 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/newest-cni-693564/id_rsa Username:docker}
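Note: each addon manifest in this block is deployed the same way: the YAML bytes are embedded in the minikube binary and written over the SSH session into /etc/kubernetes/addons/ (the "scp memory -->" lines), then applied with the version-pinned kubectl under /var/lib/minikube/binaries. A sketch of spot-checking one of them after the applies below, run from the host (paths taken from this log):

	minikube -p newest-cni-693564 ssh -- 'sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.28.4/kubectl get -f /etc/kubernetes/addons/storage-provisioner.yaml'
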
	I1127 11:50:33.530109  176850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 11:50:33.551013  176850 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1127 11:50:33.551041  176850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1127 11:50:33.631878  176850 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1127 11:50:33.631916  176850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1127 11:50:33.636133  176850 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1127 11:50:33.636216  176850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1127 11:50:33.648775  176850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1127 11:50:33.707238  176850 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1127 11:50:33.707270  176850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1127 11:50:33.710530  176850 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1127 11:50:33.710605  176850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1127 11:50:33.754178  176850 api_server.go:52] waiting for apiserver process to appear ...
	I1127 11:50:33.754258  176850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 11:50:33.754393  176850 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1127 11:50:33.754415  176850 cache_images.go:84] Images are preloaded, skipping loading
	I1127 11:50:33.754466  176850 cache_images.go:262] succeeded pushing to: newest-cni-693564
	I1127 11:50:33.754471  176850 cache_images.go:263] failed pushing to: 
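Note: the preload verification above compares the VM's local image store against the expected image list for Kubernetes v1.28.4; since every image is already present, the slow load/push path is skipped. The same listing can be reproduced by hand, assuming the docker runtime this profile uses:

	minikube -p newest-cni-693564 ssh -- "docker images --format '{{.Repository}}:{{.Tag}}'"
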
	I1127 11:50:33.754507  176850 main.go:141] libmachine: Making call to close driver server
	I1127 11:50:33.754525  176850 main.go:141] libmachine: (newest-cni-693564) Calling .Close
	I1127 11:50:33.754660  176850 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1127 11:50:33.754873  176850 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:50:33.754894  176850 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:50:33.754905  176850 main.go:141] libmachine: Making call to close driver server
	I1127 11:50:33.754915  176850 main.go:141] libmachine: (newest-cni-693564) Calling .Close
	I1127 11:50:33.758491  176850 main.go:141] libmachine: (newest-cni-693564) DBG | Closing plugin on server side
	I1127 11:50:33.758517  176850 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:50:33.758534  176850 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:50:33.800713  176850 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1127 11:50:33.800752  176850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1127 11:50:33.811366  176850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1127 11:50:33.856175  176850 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1127 11:50:33.856206  176850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1127 11:50:33.954290  176850 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1127 11:50:33.954314  176850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1127 11:50:34.031852  176850 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1127 11:50:34.031884  176850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1127 11:50:34.102840  176850 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1127 11:50:34.102863  176850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1127 11:50:34.128131  176850 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1127 11:50:34.128156  176850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1127 11:50:34.148468  176850 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1127 11:50:34.148489  176850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1127 11:50:34.168837  176850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1127 11:50:35.558686  176850 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.909808703s)
	I1127 11:50:35.558741  176850 main.go:141] libmachine: Making call to close driver server
	I1127 11:50:35.558745  176850 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.80446459s)
	I1127 11:50:35.558782  176850 api_server.go:72] duration metric: took 2.311091041s to wait for apiserver process to appear ...
	I1127 11:50:35.558792  176850 api_server.go:88] waiting for apiserver healthz status ...
	I1127 11:50:35.558754  176850 main.go:141] libmachine: (newest-cni-693564) Calling .Close
	I1127 11:50:35.558809  176850 api_server.go:253] Checking apiserver healthz at https://192.168.72.37:8443/healthz ...
	I1127 11:50:35.558861  176850 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.747466491s)
	I1127 11:50:35.558905  176850 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.028764014s)
	I1127 11:50:35.558913  176850 main.go:141] libmachine: Making call to close driver server
	I1127 11:50:35.558950  176850 main.go:141] libmachine: Making call to close driver server
	I1127 11:50:35.558956  176850 main.go:141] libmachine: (newest-cni-693564) Calling .Close
	I1127 11:50:35.558967  176850 main.go:141] libmachine: (newest-cni-693564) Calling .Close
	I1127 11:50:35.559126  176850 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:50:35.559159  176850 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:50:35.559273  176850 main.go:141] libmachine: (newest-cni-693564) DBG | Closing plugin on server side
	I1127 11:50:35.559205  176850 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:50:35.559280  176850 main.go:141] libmachine: Making call to close driver server
	I1127 11:50:35.559295  176850 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:50:35.559302  176850 main.go:141] libmachine: (newest-cni-693564) Calling .Close
	I1127 11:50:35.559305  176850 main.go:141] libmachine: Making call to close driver server
	I1127 11:50:35.559314  176850 main.go:141] libmachine: (newest-cni-693564) Calling .Close
	I1127 11:50:35.559245  176850 main.go:141] libmachine: (newest-cni-693564) DBG | Closing plugin on server side
	I1127 11:50:35.559262  176850 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:50:35.559369  176850 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:50:35.559380  176850 main.go:141] libmachine: Making call to close driver server
	I1127 11:50:35.559395  176850 main.go:141] libmachine: (newest-cni-693564) Calling .Close
	I1127 11:50:35.559143  176850 main.go:141] libmachine: (newest-cni-693564) DBG | Closing plugin on server side
	I1127 11:50:35.559890  176850 main.go:141] libmachine: (newest-cni-693564) DBG | Closing plugin on server side
	I1127 11:50:35.559890  176850 main.go:141] libmachine: (newest-cni-693564) DBG | Closing plugin on server side
	I1127 11:50:35.559916  176850 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:50:35.559925  176850 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:50:35.559927  176850 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:50:35.559933  176850 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:50:35.559940  176850 addons.go:467] Verifying addon metrics-server=true in "newest-cni-693564"
	I1127 11:50:35.561091  176850 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:50:35.561110  176850 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:50:35.567102  176850 main.go:141] libmachine: Making call to close driver server
	I1127 11:50:35.567118  176850 main.go:141] libmachine: (newest-cni-693564) Calling .Close
	I1127 11:50:35.567377  176850 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:50:35.567392  176850 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:50:35.567405  176850 main.go:141] libmachine: (newest-cni-693564) DBG | Closing plugin on server side
	I1127 11:50:35.568320  176850 api_server.go:279] https://192.168.72.37:8443/healthz returned 200:
	ok
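Note: the healthz gate is a plain GET against the apiserver; anything other than a 200 with an "ok" body keeps the wait loop polling. A sketch of issuing the same request with the cluster's own credentials (kubeconfig path from this run; the context name is assumed to match the profile, as minikube sets up by default):

	kubectl --kubeconfig /home/jenkins/minikube-integration/17644-122411/kubeconfig \
	  --context newest-cni-693564 get --raw /healthz
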
	I1127 11:50:35.569395  176850 api_server.go:141] control plane version: v1.28.4
	I1127 11:50:35.569419  176850 api_server.go:131] duration metric: took 10.618883ms to wait for apiserver health ...
	I1127 11:50:35.569429  176850 system_pods.go:43] waiting for kube-system pods to appear ...
	I1127 11:50:35.575352  176850 system_pods.go:59] 8 kube-system pods found
	I1127 11:50:35.575375  176850 system_pods.go:61] "coredns-5dd5756b68-vgmhm" [9679fae0-f5ec-47fe-a76b-20ac17e4b7bb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1127 11:50:35.575383  176850 system_pods.go:61] "etcd-newest-cni-693564" [e30d4691-4abb-46f0-8c8c-4fce6d35222b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1127 11:50:35.575396  176850 system_pods.go:61] "kube-apiserver-newest-cni-693564" [65a66c17-f651-4e0f-9937-d1ca71d844f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1127 11:50:35.575407  176850 system_pods.go:61] "kube-controller-manager-newest-cni-693564" [f2e62fd9-7edb-402b-bc2e-7739e98b76d7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1127 11:50:35.575424  176850 system_pods.go:61] "kube-proxy-tm46c" [e7a29bc5-a3b1-4a26-a63e-d7e2e451eb5d] Running
	I1127 11:50:35.575437  176850 system_pods.go:61] "kube-scheduler-newest-cni-693564" [951cb653-d84e-4b25-9d95-9becdedb8ec2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1127 11:50:35.575443  176850 system_pods.go:61] "metrics-server-57f55c9bc5-xtx5k" [12db0a88-bc71-4faf-8867-6b1cc1f67b8a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:50:35.575450  176850 system_pods.go:61] "storage-provisioner" [c5ab8540-d054-4636-89aa-45b5cdd5a4e5] Running
	I1127 11:50:35.575456  176850 system_pods.go:74] duration metric: took 6.020881ms to wait for pod list to return data ...
	I1127 11:50:35.575464  176850 default_sa.go:34] waiting for default service account to be created ...
	I1127 11:50:35.577631  176850 default_sa.go:45] found service account: "default"
	I1127 11:50:35.577651  176850 default_sa.go:55] duration metric: took 2.179851ms for default service account to be created ...
	I1127 11:50:35.577660  176850 kubeadm.go:581] duration metric: took 2.329971927s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1127 11:50:35.577679  176850 node_conditions.go:102] verifying NodePressure condition ...
	I1127 11:50:35.580113  176850 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1127 11:50:35.580137  176850 node_conditions.go:123] node cpu capacity is 2
	I1127 11:50:35.580146  176850 node_conditions.go:105] duration metric: took 2.459303ms to run NodePressure ...
	I1127 11:50:35.580164  176850 start.go:228] waiting for startup goroutines ...
	I1127 11:50:36.064927  176850 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.896038649s)
	I1127 11:50:36.064989  176850 main.go:141] libmachine: Making call to close driver server
	I1127 11:50:36.065007  176850 main.go:141] libmachine: (newest-cni-693564) Calling .Close
	I1127 11:50:36.065357  176850 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:50:36.065392  176850 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:50:36.065512  176850 main.go:141] libmachine: Making call to close driver server
	I1127 11:50:36.065529  176850 main.go:141] libmachine: (newest-cni-693564) Calling .Close
	I1127 11:50:36.067128  176850 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:50:36.067147  176850 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:50:36.069200  176850 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-693564 addons enable metrics-server	
	
	
	I1127 11:50:36.070887  176850 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I1127 11:50:36.072399  176850 addons.go:502] enable addons completed in 2.849676729s: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I1127 11:50:36.072447  176850 start.go:233] waiting for cluster config update ...
	I1127 11:50:36.072462  176850 start.go:242] writing updated cluster config ...
	I1127 11:50:36.072709  176850 ssh_runner.go:195] Run: rm -f paused
	I1127 11:50:36.122640  176850 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1127 11:50:36.124548  176850 out.go:177] * Done! kubectl is now configured to use "newest-cni-693564" cluster and "default" namespace by default
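Note: with this start complete, the enabled addon set can be spot-checked from the host. A quick sketch (context name again assumed to match the profile):

	minikube -p newest-cni-693564 addons list
	kubectl --context newest-cni-693564 -n kube-system get deploy metrics-server
	kubectl --context newest-cni-693564 -n kubernetes-dashboard get pods
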
	I1127 11:50:32.755491  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:34.757810  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:36.009852  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:38.511399  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:36.978258  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:38.980017  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:37.257339  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:39.755439  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:41.007807  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:43.507401  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:41.477625  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:43.977968  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:42.255008  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:44.255601  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:46.255943  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:45.508466  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:47.508888  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:50.008828  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:46.477740  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:48.977216  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:48.256002  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:50.754798  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:52.508184  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:54.508311  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:50.978915  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:52.978950  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:52.756486  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:55.255729  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:57.009400  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:59.508399  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:55.476905  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:57.479251  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:59.977946  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:57.256263  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:50:59.759966  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:01.508730  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:04.011513  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:01.978475  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:04.478054  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:02.255537  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:04.257526  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:06.508735  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:08.508972  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:06.478682  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:08.486052  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:06.754945  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:08.755894  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:11.255944  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:11.008506  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:13.008634  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:10.978992  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:12.980057  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:13.755458  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:15.755775  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:15.509753  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:18.007992  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:15.478441  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:17.479276  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:19.981637  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:18.254462  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:20.256045  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:20.509885  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:23.010971  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:21.981704  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:24.477258  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:22.755990  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:25.255153  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:25.508308  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:28.008229  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:30.008786  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:26.983114  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:29.476446  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:27.256644  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:29.755593  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:32.507927  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:35.009588  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:31.478596  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:33.977667  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:31.756543  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:34.256730  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:37.508050  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:39.509088  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:36.483199  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:38.978067  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:36.755536  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:38.755942  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:41.255274  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:42.010036  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:44.509241  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:41.478943  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:43.976973  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:43.255484  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:45.255745  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:47.008926  174671 pod_ready.go:102] pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:48.700352  174671 pod_ready.go:81] duration metric: took 4m0.000755856s waiting for pod "metrics-server-57f55c9bc5-fmxzz" in "kube-system" namespace to be "Ready" ...
	E1127 11:51:48.700385  174671 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1127 11:51:48.700407  174671 pod_ready.go:38] duration metric: took 4m10.925552041s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 11:51:48.700436  174671 kubeadm.go:640] restartCluster took 4m30.224455273s
	W1127 11:51:48.700505  174671 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
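Note: this is the actual failure path for this run: WaitExtra gives the labeled system pods 4m0s to report Ready, metrics-server never does, and the cluster is reset and re-inited below. Given the "Using image fake.domain/registry.k8s.io/echoserver:1.4" line earlier in the log, the StartStop tests appear to point metrics-server at a deliberately unpullable image, which typically leaves the pod in ImagePullBackOff. A diagnostic sketch for this profile (named no-preload-822966 at the mark-control-plane step further below):

	kubectl --context no-preload-822966 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context no-preload-822966 -n kube-system describe pods -l k8s-app=metrics-server
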
	I1127 11:51:48.700536  174671 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1127 11:51:45.977210  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:47.979056  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:47.756976  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:50.264206  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:50.477386  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:52.477890  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:54.479276  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:52.755522  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:54.756023  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:57.800072  174671 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (9.099510404s)
	I1127 11:51:57.800146  174671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:51:57.814538  174671 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1127 11:51:57.824407  174671 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1127 11:51:57.834053  174671 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
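Note: the failing `ls` here is the expected outcome after `kubeadm reset`: none of the four kubeconfigs exist anymore, so the stale-config cleanup is skipped and `kubeadm init` runs against a clean /etc/kubernetes. The equivalent check as a one-liner, run on the node (a sketch):

	sudo ls -la /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf \
	  || echo 'no stale configs found; safe to kubeadm init'
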
	I1127 11:51:57.834091  174671 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1127 11:51:58.062784  174671 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1127 11:51:56.978750  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:59.476694  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:57.256914  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:51:59.755782  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:01.476779  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:03.477176  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:01.756822  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:04.256376  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:07.642536  174671 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1127 11:52:07.642616  174671 kubeadm.go:322] [preflight] Running pre-flight checks
	I1127 11:52:07.642742  174671 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1127 11:52:07.642877  174671 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1127 11:52:07.643011  174671 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1127 11:52:07.643120  174671 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1127 11:52:07.644722  174671 out.go:204]   - Generating certificates and keys ...
	I1127 11:52:07.644816  174671 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1127 11:52:07.644896  174671 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1127 11:52:07.644990  174671 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1127 11:52:07.645069  174671 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1127 11:52:07.645164  174671 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1127 11:52:07.645234  174671 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1127 11:52:07.645302  174671 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1127 11:52:07.645366  174671 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1127 11:52:07.645473  174671 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1127 11:52:07.645566  174671 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1127 11:52:07.645615  174671 kubeadm.go:322] [certs] Using the existing "sa" key
	I1127 11:52:07.645699  174671 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1127 11:52:07.645780  174671 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1127 11:52:07.645847  174671 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1127 11:52:07.645924  174671 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1127 11:52:07.645983  174671 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1127 11:52:07.646064  174671 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1127 11:52:07.646129  174671 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1127 11:52:07.647580  174671 out.go:204]   - Booting up control plane ...
	I1127 11:52:07.647670  174671 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1127 11:52:07.647746  174671 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1127 11:52:07.647825  174671 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1127 11:52:07.647946  174671 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 11:52:07.648060  174671 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 11:52:07.648119  174671 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1127 11:52:07.648314  174671 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1127 11:52:07.648396  174671 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.005513 seconds
	I1127 11:52:07.648531  174671 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1127 11:52:07.648674  174671 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1127 11:52:07.648766  174671 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1127 11:52:07.649016  174671 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-822966 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1127 11:52:07.649095  174671 kubeadm.go:322] [bootstrap-token] Using token: p9aqhy.0a0cpxr9s7n85i6q
	I1127 11:52:07.650582  174671 out.go:204]   - Configuring RBAC rules ...
	I1127 11:52:07.650724  174671 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1127 11:52:07.650821  174671 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1127 11:52:07.650970  174671 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1127 11:52:07.651140  174671 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1127 11:52:07.651315  174671 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1127 11:52:07.651452  174671 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1127 11:52:07.651613  174671 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1127 11:52:07.651692  174671 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1127 11:52:07.651759  174671 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1127 11:52:07.651768  174671 kubeadm.go:322] 
	I1127 11:52:07.651836  174671 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1127 11:52:07.651847  174671 kubeadm.go:322] 
	I1127 11:52:07.651962  174671 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1127 11:52:07.651973  174671 kubeadm.go:322] 
	I1127 11:52:07.652010  174671 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1127 11:52:07.652116  174671 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1127 11:52:07.652197  174671 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1127 11:52:07.652210  174671 kubeadm.go:322] 
	I1127 11:52:07.652282  174671 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1127 11:52:07.652291  174671 kubeadm.go:322] 
	I1127 11:52:07.652354  174671 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1127 11:52:07.652364  174671 kubeadm.go:322] 
	I1127 11:52:07.652436  174671 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1127 11:52:07.652531  174671 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1127 11:52:07.652633  174671 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1127 11:52:07.652645  174671 kubeadm.go:322] 
	I1127 11:52:07.652743  174671 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1127 11:52:07.652855  174671 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1127 11:52:07.652866  174671 kubeadm.go:322] 
	I1127 11:52:07.652969  174671 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token p9aqhy.0a0cpxr9s7n85i6q \
	I1127 11:52:07.653103  174671 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fd8fee4179dfb986d324014921cfe97120e18a553951f83c01934cca0b94aeef \
	I1127 11:52:07.653136  174671 kubeadm.go:322] 	--control-plane 
	I1127 11:52:07.653145  174671 kubeadm.go:322] 
	I1127 11:52:07.653251  174671 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1127 11:52:07.653261  174671 kubeadm.go:322] 
	I1127 11:52:07.653357  174671 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token p9aqhy.0a0cpxr9s7n85i6q \
	I1127 11:52:07.653490  174671 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:fd8fee4179dfb986d324014921cfe97120e18a553951f83c01934cca0b94aeef 
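	The bootstrap token and CA-cert hash printed above can be cross-checked on the control-plane node. A sketch, assuming the paths logged earlier in this run (kubeadm under /var/lib/minikube/binaries/v1.28.4, certificateDir /var/lib/minikube/certs):
	
	    sudo /var/lib/minikube/binaries/v1.28.4/kubeadm token list
	    # recompute the discovery hash; it should print the fd8fee41... digest above
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | sha256sum | cut -d' ' -f1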
	I1127 11:52:07.653503  174671 cni.go:84] Creating CNI manager for ""
	I1127 11:52:07.653517  174671 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1127 11:52:07.654954  174671 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1127 11:52:07.656298  174671 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1127 11:52:07.679893  174671 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
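	The 457-byte conflist written above enables the kernel bridge CNI plugin. A representative file is sketched below; the exact contents minikube generates may differ field for field:
	
	    $ cat /etc/cni/net.d/1-k8s.conflist
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "addIf": "true",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }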
	I1127 11:52:07.717153  174671 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1127 11:52:07.717212  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:07.717254  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=81390b5609e7feb2151fde4633273d04eb05a21f minikube.k8s.io/name=no-preload-822966 minikube.k8s.io/updated_at=2023_11_27T11_52_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:07.807073  174671 ops.go:34] apiserver oom_adj: -16
	I1127 11:52:08.239090  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:08.347965  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:08.955914  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:09.455243  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:09.955223  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:05.976840  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:07.976973  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:09.979612  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:06.755195  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:09.254734  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:11.255392  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:10.455863  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:10.955315  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:11.455750  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:11.955791  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:12.456019  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:12.955449  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:13.455963  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:13.955882  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:14.455310  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:14.955244  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:11.980390  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:14.478047  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:13.257112  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:15.755025  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:15.455805  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:15.955306  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:16.455296  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:16.955682  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:17.455407  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:17.955751  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:18.455335  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:18.955315  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:19.455276  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:19.955207  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:16.478844  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:18.978304  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:20.456150  174671 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:52:20.624027  174671 kubeadm.go:1081] duration metric: took 12.906855559s to wait for elevateKubeSystemPrivileges.
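	The run of "kubectl get sa default" calls above (one roughly every 500ms from 11:52:08 to 11:52:20) is elevateKubeSystemPrivileges polling for the default ServiceAccount, which the controller-manager creates asynchronously once the control plane is up. A minimal equivalent of that loop:
	
	    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5   # retry until the default ServiceAccount exists
	    done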
	I1127 11:52:20.624062  174671 kubeadm.go:406] StartCluster complete in 5m2.183574028s
	I1127 11:52:20.624088  174671 settings.go:142] acquiring lock: {Name:mk0bde143fb6a5b453a36dab2e4269e4e489beea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:52:20.624200  174671 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17644-122411/kubeconfig
	I1127 11:52:20.625523  174671 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-122411/kubeconfig: {Name:mk165b6db416838b8311934f21a494f4c2865dd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:52:20.625776  174671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1127 11:52:20.625945  174671 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1127 11:52:20.626027  174671 addons.go:69] Setting storage-provisioner=true in profile "no-preload-822966"
	I1127 11:52:20.626045  174671 addons.go:231] Setting addon storage-provisioner=true in "no-preload-822966"
	I1127 11:52:20.626046  174671 addons.go:69] Setting default-storageclass=true in profile "no-preload-822966"
	I1127 11:52:20.626056  174671 config.go:182] Loaded profile config "no-preload-822966": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1127 11:52:20.626060  174671 addons.go:69] Setting dashboard=true in profile "no-preload-822966"
	I1127 11:52:20.626070  174671 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-822966"
	I1127 11:52:20.626072  174671 addons.go:231] Setting addon dashboard=true in "no-preload-822966"
	I1127 11:52:20.626152  174671 cache.go:107] acquiring lock: {Name:mk395a86368ef8d463afdafe89a54fa575ce50bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	W1127 11:52:20.626182  174671 addons.go:240] addon dashboard should already be in state true
	I1127 11:52:20.626220  174671 cache.go:115] /home/jenkins/minikube-integration/17644-122411/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I1127 11:52:20.626234  174671 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17644-122411/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 90.83µs
	I1127 11:52:20.626242  174671 host.go:66] Checking if "no-preload-822966" exists ...
	W1127 11:52:20.626053  174671 addons.go:240] addon storage-provisioner should already be in state true
	I1127 11:52:20.626321  174671 host.go:66] Checking if "no-preload-822966" exists ...
	I1127 11:52:20.626245  174671 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17644-122411/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I1127 11:52:20.626366  174671 cache.go:87] Successfully saved all images to host disk.
	I1127 11:52:20.626533  174671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:52:20.626540  174671 config.go:182] Loaded profile config "no-preload-822966": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1127 11:52:20.626575  174671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:52:20.626656  174671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:52:20.626072  174671 addons.go:69] Setting metrics-server=true in profile "no-preload-822966"
	I1127 11:52:20.626678  174671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:52:20.626679  174671 addons.go:231] Setting addon metrics-server=true in "no-preload-822966"
	W1127 11:52:20.626690  174671 addons.go:240] addon metrics-server should already be in state true
	I1127 11:52:20.626814  174671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:52:20.626841  174671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:52:20.626860  174671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:52:20.626866  174671 host.go:66] Checking if "no-preload-822966" exists ...
	I1127 11:52:20.626883  174671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:52:20.627211  174671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:52:20.627240  174671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:52:20.645923  174671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34009
	I1127 11:52:20.645964  174671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45571
	I1127 11:52:20.646125  174671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41615
	I1127 11:52:20.646403  174671 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:52:20.646487  174671 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:52:20.646512  174671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40403
	I1127 11:52:20.646581  174671 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:52:20.646896  174671 main.go:141] libmachine: Using API Version  1
	I1127 11:52:20.646916  174671 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:52:20.646977  174671 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:52:20.647041  174671 main.go:141] libmachine: Using API Version  1
	I1127 11:52:20.647065  174671 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:52:20.647294  174671 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:52:20.647432  174671 main.go:141] libmachine: Using API Version  1
	I1127 11:52:20.647458  174671 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:52:20.647516  174671 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:52:20.647777  174671 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:52:20.647819  174671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:52:20.647863  174671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:52:20.647997  174671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:52:20.648021  174671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:52:20.648272  174671 main.go:141] libmachine: Using API Version  1
	I1127 11:52:20.648307  174671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:52:20.648339  174671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:52:20.648351  174671 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:52:20.648706  174671 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:52:20.648890  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetState
	I1127 11:52:20.649479  174671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35401
	I1127 11:52:20.650068  174671 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:52:20.650797  174671 main.go:141] libmachine: Using API Version  1
	I1127 11:52:20.650815  174671 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:52:20.651259  174671 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:52:20.651936  174671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:52:20.651966  174671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:52:20.652240  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetState
	I1127 11:52:20.655374  174671 addons.go:231] Setting addon default-storageclass=true in "no-preload-822966"
	W1127 11:52:20.655393  174671 addons.go:240] addon default-storageclass should already be in state true
	I1127 11:52:20.655424  174671 host.go:66] Checking if "no-preload-822966" exists ...
	I1127 11:52:20.655794  174671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:52:20.655828  174671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:52:20.667319  174671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36891
	I1127 11:52:20.667905  174671 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:52:20.668206  174671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36061
	I1127 11:52:20.668453  174671 main.go:141] libmachine: Using API Version  1
	I1127 11:52:20.668479  174671 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:52:20.668888  174671 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:52:20.669012  174671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37345
	I1127 11:52:20.669152  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetState
	I1127 11:52:20.669275  174671 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:52:20.669332  174671 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:52:20.669701  174671 main.go:141] libmachine: Using API Version  1
	I1127 11:52:20.669715  174671 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:52:20.669912  174671 main.go:141] libmachine: Using API Version  1
	I1127 11:52:20.669924  174671 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:52:20.670026  174671 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:52:20.670165  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetState
	I1127 11:52:20.670400  174671 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:52:20.670939  174671 main.go:141] libmachine: (no-preload-822966) Calling .DriverName
	I1127 11:52:20.671104  174671 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1127 11:52:20.671121  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetSSHHostname
	I1127 11:52:20.671311  174671 main.go:141] libmachine: (no-preload-822966) Calling .DriverName
	I1127 11:52:20.673485  174671 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 11:52:20.674980  174671 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 11:52:20.674998  174671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1127 11:52:20.675018  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetSSHHostname
	I1127 11:52:20.672845  174671 main.go:141] libmachine: (no-preload-822966) Calling .DriverName
	I1127 11:52:20.675129  174671 main.go:141] libmachine: (no-preload-822966) DBG | domain no-preload-822966 has defined MAC address 52:54:00:6a:5a:98 in network mk-no-preload-822966
	I1127 11:52:20.677182  174671 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1127 11:52:20.675410  174671 main.go:141] libmachine: (no-preload-822966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:5a:98", ip: ""} in network mk-no-preload-822966: {Iface:virbr1 ExpiryTime:2023-11-27 12:47:03 +0000 UTC Type:0 Mac:52:54:00:6a:5a:98 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:no-preload-822966 Clientid:01:52:54:00:6a:5a:98}
	I1127 11:52:20.675866  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetSSHPort
	I1127 11:52:20.680635  174671 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1127 11:52:20.679298  174671 main.go:141] libmachine: (no-preload-822966) DBG | domain no-preload-822966 has defined IP address 192.168.39.84 and MAC address 52:54:00:6a:5a:98 in network mk-no-preload-822966
	I1127 11:52:20.679430  174671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38105
	I1127 11:52:20.679454  174671 main.go:141] libmachine: (no-preload-822966) DBG | domain no-preload-822966 has defined MAC address 52:54:00:6a:5a:98 in network mk-no-preload-822966
	I1127 11:52:20.679615  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetSSHKeyPath
	I1127 11:52:20.680156  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetSSHPort
	I1127 11:52:20.680170  174671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40657
	I1127 11:52:20.680913  174671 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-822966" context rescaled to 1 replica
	I1127 11:52:20.682018  174671 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1127 11:52:20.682031  174671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1127 11:52:20.682019  174671 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1127 11:52:20.682045  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetSSHHostname
	I1127 11:52:20.683263  174671 out.go:177] * Verifying Kubernetes components...
	I1127 11:52:20.682096  174671 main.go:141] libmachine: (no-preload-822966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:5a:98", ip: ""} in network mk-no-preload-822966: {Iface:virbr1 ExpiryTime:2023-11-27 12:47:03 +0000 UTC Type:0 Mac:52:54:00:6a:5a:98 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:no-preload-822966 Clientid:01:52:54:00:6a:5a:98}
	I1127 11:52:20.682543  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetSSHKeyPath
	I1127 11:52:20.682571  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetSSHUsername
	I1127 11:52:20.682756  174671 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:52:20.683017  174671 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:52:20.684677  174671 main.go:141] libmachine: (no-preload-822966) DBG | domain no-preload-822966 has defined IP address 192.168.39.84 and MAC address 52:54:00:6a:5a:98 in network mk-no-preload-822966
	I1127 11:52:20.684715  174671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:52:20.684866  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetSSHUsername
	I1127 11:52:20.684925  174671 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/no-preload-822966/id_rsa Username:docker}
	I1127 11:52:20.685201  174671 main.go:141] libmachine: Using API Version  1
	I1127 11:52:20.685216  174671 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:52:20.685282  174671 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/no-preload-822966/id_rsa Username:docker}
	I1127 11:52:20.685414  174671 main.go:141] libmachine: Using API Version  1
	I1127 11:52:20.685431  174671 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:52:20.685513  174671 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:52:20.685614  174671 main.go:141] libmachine: (no-preload-822966) DBG | domain no-preload-822966 has defined MAC address 52:54:00:6a:5a:98 in network mk-no-preload-822966
	I1127 11:52:20.685699  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetState
	I1127 11:52:20.685892  174671 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:52:20.686195  174671 main.go:141] libmachine: (no-preload-822966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:5a:98", ip: ""} in network mk-no-preload-822966: {Iface:virbr1 ExpiryTime:2023-11-27 12:47:03 +0000 UTC Type:0 Mac:52:54:00:6a:5a:98 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:no-preload-822966 Clientid:01:52:54:00:6a:5a:98}
	I1127 11:52:20.686216  174671 main.go:141] libmachine: (no-preload-822966) DBG | domain no-preload-822966 has defined IP address 192.168.39.84 and MAC address 52:54:00:6a:5a:98 in network mk-no-preload-822966
	I1127 11:52:20.686471  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetSSHPort
	I1127 11:52:20.686567  174671 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:52:20.686595  174671 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:52:20.686767  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetSSHKeyPath
	I1127 11:52:20.686974  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetSSHUsername
	I1127 11:52:20.687252  174671 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/no-preload-822966/id_rsa Username:docker}
	I1127 11:52:20.687311  174671 main.go:141] libmachine: (no-preload-822966) Calling .DriverName
	I1127 11:52:20.688932  174671 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1127 11:52:18.254705  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:20.255731  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:20.690239  174671 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1127 11:52:20.690256  174671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1127 11:52:20.690272  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetSSHHostname
	I1127 11:52:20.693085  174671 main.go:141] libmachine: (no-preload-822966) DBG | domain no-preload-822966 has defined MAC address 52:54:00:6a:5a:98 in network mk-no-preload-822966
	I1127 11:52:20.693480  174671 main.go:141] libmachine: (no-preload-822966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:5a:98", ip: ""} in network mk-no-preload-822966: {Iface:virbr1 ExpiryTime:2023-11-27 12:47:03 +0000 UTC Type:0 Mac:52:54:00:6a:5a:98 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:no-preload-822966 Clientid:01:52:54:00:6a:5a:98}
	I1127 11:52:20.693520  174671 main.go:141] libmachine: (no-preload-822966) DBG | domain no-preload-822966 has defined IP address 192.168.39.84 and MAC address 52:54:00:6a:5a:98 in network mk-no-preload-822966
	I1127 11:52:20.693676  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetSSHPort
	I1127 11:52:20.693862  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetSSHKeyPath
	I1127 11:52:20.694028  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetSSHUsername
	I1127 11:52:20.694180  174671 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/no-preload-822966/id_rsa Username:docker}
	I1127 11:52:20.704126  174671 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35943
	I1127 11:52:20.704555  174671 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:52:20.705015  174671 main.go:141] libmachine: Using API Version  1
	I1127 11:52:20.705042  174671 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:52:20.705371  174671 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:52:20.705553  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetState
	I1127 11:52:20.707292  174671 main.go:141] libmachine: (no-preload-822966) Calling .DriverName
	I1127 11:52:20.707521  174671 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1127 11:52:20.707541  174671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1127 11:52:20.707558  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetSSHHostname
	I1127 11:52:20.710005  174671 main.go:141] libmachine: (no-preload-822966) DBG | domain no-preload-822966 has defined MAC address 52:54:00:6a:5a:98 in network mk-no-preload-822966
	I1127 11:52:20.710413  174671 main.go:141] libmachine: (no-preload-822966) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:5a:98", ip: ""} in network mk-no-preload-822966: {Iface:virbr1 ExpiryTime:2023-11-27 12:47:03 +0000 UTC Type:0 Mac:52:54:00:6a:5a:98 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:no-preload-822966 Clientid:01:52:54:00:6a:5a:98}
	I1127 11:52:20.710428  174671 main.go:141] libmachine: (no-preload-822966) DBG | domain no-preload-822966 has defined IP address 192.168.39.84 and MAC address 52:54:00:6a:5a:98 in network mk-no-preload-822966
	I1127 11:52:20.710551  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetSSHPort
	I1127 11:52:20.710651  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetSSHKeyPath
	I1127 11:52:20.710818  174671 main.go:141] libmachine: (no-preload-822966) Calling .GetSSHUsername
	I1127 11:52:20.710904  174671 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/no-preload-822966/id_rsa Username:docker}
	I1127 11:52:20.991987  174671 node_ready.go:35] waiting up to 6m0s for node "no-preload-822966" to be "Ready" ...
	I1127 11:52:20.992026  174671 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1127 11:52:20.995501  174671 node_ready.go:49] node "no-preload-822966" has status "Ready":"True"
	I1127 11:52:20.995532  174671 node_ready.go:38] duration metric: took 3.519977ms waiting for node "no-preload-822966" to be "Ready" ...
	I1127 11:52:20.995546  174671 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 11:52:21.001461  174671 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sd6gn" in "kube-system" namespace to be "Ready" ...
	I1127 11:52:21.010732  174671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1127 11:52:21.023109  174671 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1127 11:52:21.023126  174671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1127 11:52:21.073340  174671 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1127 11:52:21.073372  174671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1127 11:52:21.137291  174671 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1127 11:52:21.137319  174671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1127 11:52:21.199053  174671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 11:52:21.250842  174671 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1127 11:52:21.250873  174671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1127 11:52:21.283569  174671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1127 11:52:21.317754  174671 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1127 11:52:21.317791  174671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1127 11:52:21.476364  174671 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1127 11:52:21.476390  174671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1127 11:52:21.581445  174671 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1127 11:52:21.581467  174671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1127 11:52:21.738839  174671 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1127 11:52:21.738873  174671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1127 11:52:21.783976  174671 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1127 11:52:21.784010  174671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1127 11:52:21.856406  174671 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1127 11:52:21.856433  174671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1127 11:52:21.883147  174671 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1127 11:52:21.883185  174671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1127 11:52:21.900505  174671 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1127 11:52:21.900534  174671 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1127 11:52:21.922565  174671 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1127 11:52:22.988893  174671 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.996831272s)
	I1127 11:52:22.988916  174671 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.97815302s)
	I1127 11:52:22.988928  174671 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
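	The injected record comes from the sed pipeline completed just above, which splices a hosts block ahead of the forward directive (and a log directive ahead of errors) in the CoreDNS Corefile before replacing the ConfigMap. Reconstructed from that command, the added stanza is:
	
	        hosts {
	           192.168.39.1 host.minikube.internal
	           fallthrough
	        }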
	I1127 11:52:22.988966  174671 main.go:141] libmachine: Making call to close driver server
	I1127 11:52:22.988978  174671 main.go:141] libmachine: (no-preload-822966) Calling .Close
	I1127 11:52:22.988984  174671 ssh_runner.go:235] Completed: docker images --format {{.Repository}}:{{.Tag}}: (2.317864229s)
	I1127 11:52:22.989006  174671 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1127 11:52:22.989020  174671 cache_images.go:84] Images are preloaded, skipping loading
	I1127 11:52:22.989028  174671 cache_images.go:262] succeeded pushing to: no-preload-822966
	I1127 11:52:22.989033  174671 cache_images.go:263] failed pushing to: 
	I1127 11:52:22.989045  174671 main.go:141] libmachine: Making call to close driver server
	I1127 11:52:22.989060  174671 main.go:141] libmachine: (no-preload-822966) Calling .Close
	I1127 11:52:22.989421  174671 main.go:141] libmachine: (no-preload-822966) DBG | Closing plugin on server side
	I1127 11:52:22.989443  174671 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:52:22.989445  174671 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:52:22.989457  174671 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:52:22.989463  174671 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:52:22.989467  174671 main.go:141] libmachine: Making call to close driver server
	I1127 11:52:22.989475  174671 main.go:141] libmachine: Making call to close driver server
	I1127 11:52:22.989477  174671 main.go:141] libmachine: (no-preload-822966) Calling .Close
	I1127 11:52:22.989487  174671 main.go:141] libmachine: (no-preload-822966) Calling .Close
	I1127 11:52:22.991231  174671 main.go:141] libmachine: (no-preload-822966) DBG | Closing plugin on server side
	I1127 11:52:22.991256  174671 main.go:141] libmachine: (no-preload-822966) DBG | Closing plugin on server side
	I1127 11:52:22.991280  174671 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:52:22.991288  174671 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:52:22.991465  174671 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:52:22.991482  174671 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:52:22.998374  174671 main.go:141] libmachine: Making call to close driver server
	I1127 11:52:22.998392  174671 main.go:141] libmachine: (no-preload-822966) Calling .Close
	I1127 11:52:22.998663  174671 main.go:141] libmachine: (no-preload-822966) DBG | Closing plugin on server side
	I1127 11:52:22.998667  174671 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:52:22.998681  174671 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:52:23.013849  174671 pod_ready.go:102] pod "coredns-5dd5756b68-sd6gn" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:23.720424  174671 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.521321818s)
	I1127 11:52:23.720491  174671 main.go:141] libmachine: Making call to close driver server
	I1127 11:52:23.720503  174671 main.go:141] libmachine: (no-preload-822966) Calling .Close
	I1127 11:52:23.720833  174671 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:52:23.720879  174671 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:52:23.720881  174671 main.go:141] libmachine: (no-preload-822966) DBG | Closing plugin on server side
	I1127 11:52:23.720900  174671 main.go:141] libmachine: Making call to close driver server
	I1127 11:52:23.720911  174671 main.go:141] libmachine: (no-preload-822966) Calling .Close
	I1127 11:52:23.722710  174671 main.go:141] libmachine: (no-preload-822966) DBG | Closing plugin on server side
	I1127 11:52:23.722712  174671 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:52:23.722738  174671 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:52:24.148648  174671 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.865031736s)
	I1127 11:52:24.148703  174671 main.go:141] libmachine: Making call to close driver server
	I1127 11:52:24.148722  174671 main.go:141] libmachine: (no-preload-822966) Calling .Close
	I1127 11:52:24.149068  174671 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:52:24.149091  174671 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:52:24.149107  174671 main.go:141] libmachine: Making call to close driver server
	I1127 11:52:24.149117  174671 main.go:141] libmachine: (no-preload-822966) Calling .Close
	I1127 11:52:24.149402  174671 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:52:24.149428  174671 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:52:24.149440  174671 addons.go:467] Verifying addon metrics-server=true in "no-preload-822966"
	I1127 11:52:24.149408  174671 main.go:141] libmachine: (no-preload-822966) DBG | Closing plugin on server side
	I1127 11:52:24.513588  174671 pod_ready.go:92] pod "coredns-5dd5756b68-sd6gn" in "kube-system" namespace has status "Ready":"True"
	I1127 11:52:24.513614  174671 pod_ready.go:81] duration metric: took 3.512132572s waiting for pod "coredns-5dd5756b68-sd6gn" in "kube-system" namespace to be "Ready" ...
	I1127 11:52:24.513623  174671 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-822966" in "kube-system" namespace to be "Ready" ...
	I1127 11:52:24.519498  174671 pod_ready.go:92] pod "etcd-no-preload-822966" in "kube-system" namespace has status "Ready":"True"
	I1127 11:52:24.519516  174671 pod_ready.go:81] duration metric: took 5.887033ms waiting for pod "etcd-no-preload-822966" in "kube-system" namespace to be "Ready" ...
	I1127 11:52:24.519523  174671 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-822966" in "kube-system" namespace to be "Ready" ...
	I1127 11:52:24.527356  174671 pod_ready.go:92] pod "kube-apiserver-no-preload-822966" in "kube-system" namespace has status "Ready":"True"
	I1127 11:52:24.527378  174671 pod_ready.go:81] duration metric: took 7.847775ms waiting for pod "kube-apiserver-no-preload-822966" in "kube-system" namespace to be "Ready" ...
	I1127 11:52:24.527390  174671 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-822966" in "kube-system" namespace to be "Ready" ...
	I1127 11:52:24.545308  174671 pod_ready.go:92] pod "kube-controller-manager-no-preload-822966" in "kube-system" namespace has status "Ready":"True"
	I1127 11:52:24.545333  174671 pod_ready.go:81] duration metric: took 17.933825ms waiting for pod "kube-controller-manager-no-preload-822966" in "kube-system" namespace to be "Ready" ...
	I1127 11:52:24.545346  174671 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-drsgx" in "kube-system" namespace to be "Ready" ...
	I1127 11:52:24.551465  174671 pod_ready.go:92] pod "kube-proxy-drsgx" in "kube-system" namespace has status "Ready":"True"
	I1127 11:52:24.551481  174671 pod_ready.go:81] duration metric: took 6.128173ms waiting for pod "kube-proxy-drsgx" in "kube-system" namespace to be "Ready" ...
	I1127 11:52:24.551488  174671 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-822966" in "kube-system" namespace to be "Ready" ...
	I1127 11:52:24.928238  174671 pod_ready.go:92] pod "kube-scheduler-no-preload-822966" in "kube-system" namespace has status "Ready":"True"
	I1127 11:52:24.928262  174671 pod_ready.go:81] duration metric: took 376.767669ms waiting for pod "kube-scheduler-no-preload-822966" in "kube-system" namespace to be "Ready" ...
	I1127 11:52:24.928271  174671 pod_ready.go:38] duration metric: took 3.93271188s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 11:52:24.928290  174671 api_server.go:52] waiting for apiserver process to appear ...
	I1127 11:52:24.928349  174671 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 11:52:25.039823  174671 api_server.go:72] duration metric: took 4.357760699s to wait for apiserver process to appear ...
	I1127 11:52:25.039855  174671 api_server.go:88] waiting for apiserver healthz status ...
	I1127 11:52:25.039869  174671 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.117263208s)
	I1127 11:52:25.039873  174671 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I1127 11:52:25.039932  174671 main.go:141] libmachine: Making call to close driver server
	I1127 11:52:25.039972  174671 main.go:141] libmachine: (no-preload-822966) Calling .Close
	I1127 11:52:25.040305  174671 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:52:25.040343  174671 main.go:141] libmachine: (no-preload-822966) DBG | Closing plugin on server side
	I1127 11:52:25.040354  174671 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:52:25.040375  174671 main.go:141] libmachine: Making call to close driver server
	I1127 11:52:25.040386  174671 main.go:141] libmachine: (no-preload-822966) Calling .Close
	I1127 11:52:25.040636  174671 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:52:25.040652  174671 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:52:25.042576  174671 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-822966 addons enable metrics-server	
	
	
	I1127 11:52:25.044053  174671 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1127 11:52:25.045416  174671 addons.go:502] enable addons completed in 4.419479892s: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1127 11:52:25.047249  174671 api_server.go:279] https://192.168.39.84:8443/healthz returned 200:
	ok
	I1127 11:52:25.048853  174671 api_server.go:141] control plane version: v1.28.4
	I1127 11:52:25.048871  174671 api_server.go:131] duration metric: took 9.008717ms to wait for apiserver health ...
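	The same probe can be reproduced by hand against the endpoint logged above. A sketch; -k skips TLS verification, or point curl at the cluster CA (the certs path used earlier in this run) instead:
	
	    $ curl -sk https://192.168.39.84:8443/healthz
	    ok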
	I1127 11:52:25.048880  174671 system_pods.go:43] waiting for kube-system pods to appear ...
	I1127 11:52:25.115466  174671 system_pods.go:59] 8 kube-system pods found
	I1127 11:52:25.115498  174671 system_pods.go:61] "coredns-5dd5756b68-sd6gn" [d0b99496-01b1-45e0-a513-f391591c9948] Running
	I1127 11:52:25.115508  174671 system_pods.go:61] "etcd-no-preload-822966" [82a50cc8-0b03-471d-97d2-6375fdc9312f] Running
	I1127 11:52:25.115515  174671 system_pods.go:61] "kube-apiserver-no-preload-822966" [8fc950b5-79a6-4bf0-baaf-a93d4039d91d] Running
	I1127 11:52:25.115521  174671 system_pods.go:61] "kube-controller-manager-no-preload-822966" [bd8e96a0-bbf5-48e5-aac8-e36c1d0ca19d] Running
	I1127 11:52:25.115527  174671 system_pods.go:61] "kube-proxy-drsgx" [ac18ae35-06a7-4613-aa71-917c41518111] Running
	I1127 11:52:25.115539  174671 system_pods.go:61] "kube-scheduler-no-preload-822966" [252f06a4-44df-435b-94e0-3abeb1e34dd6] Running
	I1127 11:52:25.115552  174671 system_pods.go:61] "metrics-server-57f55c9bc5-srmxv" [00813c29-2f8d-47e5-8751-b8d88f7aa33d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:52:25.115563  174671 system_pods.go:61] "storage-provisioner" [f9b69d62-0ddf-4fae-a637-7512daad90b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1127 11:52:25.115579  174671 system_pods.go:74] duration metric: took 66.691308ms to wait for pod list to return data ...
	I1127 11:52:25.115592  174671 default_sa.go:34] waiting for default service account to be created ...
	I1127 11:52:20.980050  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:23.478091  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:25.311790  174671 default_sa.go:45] found service account: "default"
	I1127 11:52:25.311826  174671 default_sa.go:55] duration metric: took 196.222126ms for default service account to be created ...
	I1127 11:52:25.311837  174671 system_pods.go:116] waiting for k8s-apps to be running ...
	I1127 11:52:25.514585  174671 system_pods.go:86] 8 kube-system pods found
	I1127 11:52:25.514614  174671 system_pods.go:89] "coredns-5dd5756b68-sd6gn" [d0b99496-01b1-45e0-a513-f391591c9948] Running
	I1127 11:52:25.514619  174671 system_pods.go:89] "etcd-no-preload-822966" [82a50cc8-0b03-471d-97d2-6375fdc9312f] Running
	I1127 11:52:25.514623  174671 system_pods.go:89] "kube-apiserver-no-preload-822966" [8fc950b5-79a6-4bf0-baaf-a93d4039d91d] Running
	I1127 11:52:25.514628  174671 system_pods.go:89] "kube-controller-manager-no-preload-822966" [bd8e96a0-bbf5-48e5-aac8-e36c1d0ca19d] Running
	I1127 11:52:25.514632  174671 system_pods.go:89] "kube-proxy-drsgx" [ac18ae35-06a7-4613-aa71-917c41518111] Running
	I1127 11:52:25.514636  174671 system_pods.go:89] "kube-scheduler-no-preload-822966" [252f06a4-44df-435b-94e0-3abeb1e34dd6] Running
	I1127 11:52:25.514642  174671 system_pods.go:89] "metrics-server-57f55c9bc5-srmxv" [00813c29-2f8d-47e5-8751-b8d88f7aa33d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:52:25.514650  174671 system_pods.go:89] "storage-provisioner" [f9b69d62-0ddf-4fae-a637-7512daad90b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1127 11:52:25.514661  174671 system_pods.go:126] duration metric: took 202.818801ms to wait for k8s-apps to be running ...
	I1127 11:52:25.514673  174671 system_svc.go:44] waiting for kubelet service to be running ....
	I1127 11:52:25.514719  174671 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:52:25.541072  174671 system_svc.go:56] duration metric: took 26.386244ms WaitForService to wait for kubelet.
	I1127 11:52:25.541101  174671 kubeadm.go:581] duration metric: took 4.859048959s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1127 11:52:25.541126  174671 node_conditions.go:102] verifying NodePressure condition ...
	I1127 11:52:25.711626  174671 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1127 11:52:25.711673  174671 node_conditions.go:123] node cpu capacity is 2
	I1127 11:52:25.711683  174671 node_conditions.go:105] duration metric: took 170.551715ms to run NodePressure ...
	I1127 11:52:25.711694  174671 start.go:228] waiting for startup goroutines ...
	I1127 11:52:25.711703  174671 start.go:233] waiting for cluster config update ...
	I1127 11:52:25.711715  174671 start.go:242] writing updated cluster config ...
	I1127 11:52:25.712028  174671 ssh_runner.go:195] Run: rm -f paused
	I1127 11:52:25.763864  174671 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1127 11:52:25.765942  174671 out.go:177] * Done! kubectl is now configured to use "no-preload-822966" cluster and "default" namespace by default
	I1127 11:52:22.257019  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:24.756166  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:25.482083  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:27.977856  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:29.978586  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:27.255726  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:29.258047  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:32.476726  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:34.477358  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:31.759077  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:34.256697  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:36.478167  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:38.478309  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:36.756596  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:39.257098  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:40.976700  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:42.979278  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:41.757070  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:44.255912  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:45.478056  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:47.978767  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:49.981001  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:46.754537  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:48.756714  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:51.254991  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:51.981828  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:54.478519  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:53.755634  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:55.756265  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:56.479982  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:58.977316  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:52:57.757564  175050 pod_ready.go:102] pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace has status "Ready":"False"
	I1127 11:53:00.151984  175050 pod_ready.go:81] duration metric: took 4m0.000227379s waiting for pod "metrics-server-74d5856cc6-frk5n" in "kube-system" namespace to be "Ready" ...
	E1127 11:53:00.152035  175050 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1127 11:53:00.152066  175050 pod_ready.go:38] duration metric: took 4m1.401665261s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 11:53:00.152101  175050 kubeadm.go:640] restartCluster took 5m9.614685634s
	W1127 11:53:00.152195  175050 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1127 11:53:00.152230  175050 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1127 11:53:02.881085  175050 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (2.728822292s)
	I1127 11:53:02.881167  175050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:53:02.895380  175050 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1127 11:53:02.905137  175050 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1127 11:53:02.914543  175050 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1127 11:53:02.914588  175050 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1127 11:53:02.970139  175050 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1127 11:53:02.970227  175050 kubeadm.go:322] [preflight] Running pre-flight checks
	I1127 11:53:03.189123  175050 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1127 11:53:03.189268  175050 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1127 11:53:03.189395  175050 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1127 11:53:03.345044  175050 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1127 11:53:03.346210  175050 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1127 11:53:03.356649  175050 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1127 11:53:03.476669  175050 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1127 11:53:03.478202  175050 out.go:204]   - Generating certificates and keys ...
	I1127 11:53:03.478366  175050 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1127 11:53:03.478520  175050 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1127 11:53:03.478645  175050 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1127 11:53:03.478746  175050 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1127 11:53:03.478852  175050 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1127 11:53:03.479971  175050 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1127 11:53:03.480619  175050 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1127 11:53:03.481113  175050 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1127 11:53:03.482544  175050 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1127 11:53:03.485709  175050 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1127 11:53:03.486485  175050 kubeadm.go:322] [certs] Using the existing "sa" key
	I1127 11:53:03.486730  175050 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1127 11:53:03.627591  175050 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1127 11:53:04.054264  175050 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1127 11:53:04.330147  175050 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1127 11:53:04.383408  175050 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1127 11:53:04.384279  175050 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1127 11:53:00.978258  175460 pod_ready.go:102] pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace has status "Ready":"False"
	I1127 11:53:02.671025  175460 pod_ready.go:81] duration metric: took 4m0.000839682s waiting for pod "metrics-server-57f55c9bc5-mbpl7" in "kube-system" namespace to be "Ready" ...
	E1127 11:53:02.671063  175460 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1127 11:53:02.671073  175460 pod_ready.go:38] duration metric: took 4m4.167336243s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 11:53:02.671095  175460 api_server.go:52] waiting for apiserver process to appear ...
	I1127 11:53:02.671177  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1127 11:53:02.697481  175460 logs.go:284] 2 containers: [eed1658a83d4 9d01d9bc1669]
	I1127 11:53:02.697560  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1127 11:53:02.719775  175460 logs.go:284] 2 containers: [95fbed7a0c30 82ad15ab39ba]
	I1127 11:53:02.719843  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1127 11:53:02.740184  175460 logs.go:284] 2 containers: [7b6639ff5bf1 5ec6011af158]
	I1127 11:53:02.740282  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1127 11:53:02.761439  175460 logs.go:284] 2 containers: [ec97643bbdaa c9b5e06f7ed2]
	I1127 11:53:02.761522  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1127 11:53:02.786252  175460 logs.go:284] 2 containers: [cbb0ebc02945 9deab8695a9f]
	I1127 11:53:02.786347  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1127 11:53:02.817631  175460 logs.go:284] 2 containers: [e4b00b929255 4134c5824af5]
	I1127 11:53:02.817728  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1127 11:53:02.844519  175460 logs.go:284] 0 containers: []
	W1127 11:53:02.844547  175460 logs.go:286] No container was found matching "kindnet"
	I1127 11:53:02.844599  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1127 11:53:02.867247  175460 logs.go:284] 1 containers: [37775197b0a6]
	I1127 11:53:02.867332  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1127 11:53:02.896994  175460 logs.go:284] 2 containers: [b2e193a8c59b 0574a2a7879e]
	I1127 11:53:02.897033  175460 logs.go:123] Gathering logs for kube-proxy [cbb0ebc02945] ...
	I1127 11:53:02.897046  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbb0ebc02945"
	I1127 11:53:02.946270  175460 logs.go:123] Gathering logs for kube-proxy [9deab8695a9f] ...
	I1127 11:53:02.946315  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9deab8695a9f"
	I1127 11:53:02.972788  175460 logs.go:123] Gathering logs for kubernetes-dashboard [37775197b0a6] ...
	I1127 11:53:02.972827  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37775197b0a6"
	I1127 11:53:02.998753  175460 logs.go:123] Gathering logs for storage-provisioner [b2e193a8c59b] ...
	I1127 11:53:02.998782  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e193a8c59b"
	I1127 11:53:03.024278  175460 logs.go:123] Gathering logs for storage-provisioner [0574a2a7879e] ...
	I1127 11:53:03.024319  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0574a2a7879e"
	I1127 11:53:03.052932  175460 logs.go:123] Gathering logs for kube-apiserver [9d01d9bc1669] ...
	I1127 11:53:03.052977  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d01d9bc1669"
	I1127 11:53:03.108638  175460 logs.go:123] Gathering logs for etcd [95fbed7a0c30] ...
	I1127 11:53:03.108677  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95fbed7a0c30"
	I1127 11:53:03.150913  175460 logs.go:123] Gathering logs for coredns [7b6639ff5bf1] ...
	I1127 11:53:03.150948  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6639ff5bf1"
	I1127 11:53:03.174649  175460 logs.go:123] Gathering logs for Docker ...
	I1127 11:53:03.174680  175460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1127 11:53:03.237785  175460 logs.go:123] Gathering logs for container status ...
	I1127 11:53:03.237824  175460 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1127 11:53:03.333929  175460 logs.go:123] Gathering logs for kubelet ...
	I1127 11:53:03.333975  175460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1127 11:53:03.405974  175460 logs.go:123] Gathering logs for kube-apiserver [eed1658a83d4] ...
	I1127 11:53:03.406015  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed1658a83d4"
	I1127 11:53:03.448563  175460 logs.go:123] Gathering logs for kube-controller-manager [4134c5824af5] ...
	I1127 11:53:03.448612  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4134c5824af5"
	I1127 11:53:03.500246  175460 logs.go:123] Gathering logs for describe nodes ...
	I1127 11:53:03.500273  175460 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1127 11:53:03.666001  175460 logs.go:123] Gathering logs for coredns [5ec6011af158] ...
	I1127 11:53:03.666037  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec6011af158"
	I1127 11:53:03.693039  175460 logs.go:123] Gathering logs for kube-scheduler [c9b5e06f7ed2] ...
	I1127 11:53:03.693069  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b5e06f7ed2"
	I1127 11:53:03.731070  175460 logs.go:123] Gathering logs for kube-controller-manager [e4b00b929255] ...
	I1127 11:53:03.731100  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4b00b929255"
	I1127 11:53:03.769823  175460 logs.go:123] Gathering logs for dmesg ...
	I1127 11:53:03.769851  175460 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1127 11:53:03.783799  175460 logs.go:123] Gathering logs for etcd [82ad15ab39ba] ...
	I1127 11:53:03.783828  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ad15ab39ba"
	I1127 11:53:03.814717  175460 logs.go:123] Gathering logs for kube-scheduler [ec97643bbdaa] ...
	I1127 11:53:03.814746  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec97643bbdaa"
	I1127 11:53:04.385976  175050 out.go:204]   - Booting up control plane ...
	I1127 11:53:04.386082  175050 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1127 11:53:04.391450  175050 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1127 11:53:04.392942  175050 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1127 11:53:04.393725  175050 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1127 11:53:04.395939  175050 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1127 11:53:06.362146  175460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 11:53:06.380606  175460 api_server.go:72] duration metric: took 4m15.431247283s to wait for apiserver process to appear ...
	I1127 11:53:06.380636  175460 api_server.go:88] waiting for apiserver healthz status ...
	I1127 11:53:06.380726  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1127 11:53:06.404458  175460 logs.go:284] 2 containers: [eed1658a83d4 9d01d9bc1669]
	I1127 11:53:06.404540  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1127 11:53:06.430403  175460 logs.go:284] 2 containers: [95fbed7a0c30 82ad15ab39ba]
	I1127 11:53:06.430518  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1127 11:53:06.454493  175460 logs.go:284] 2 containers: [7b6639ff5bf1 5ec6011af158]
	I1127 11:53:06.454592  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1127 11:53:06.486785  175460 logs.go:284] 2 containers: [ec97643bbdaa c9b5e06f7ed2]
	I1127 11:53:06.486862  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1127 11:53:06.507694  175460 logs.go:284] 2 containers: [cbb0ebc02945 9deab8695a9f]
	I1127 11:53:06.507775  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1127 11:53:06.526054  175460 logs.go:284] 2 containers: [e4b00b929255 4134c5824af5]
	I1127 11:53:06.526120  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1127 11:53:06.544403  175460 logs.go:284] 0 containers: []
	W1127 11:53:06.544422  175460 logs.go:286] No container was found matching "kindnet"
	I1127 11:53:06.544469  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1127 11:53:06.564103  175460 logs.go:284] 1 containers: [37775197b0a6]
	I1127 11:53:06.564185  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1127 11:53:06.588204  175460 logs.go:284] 2 containers: [b2e193a8c59b 0574a2a7879e]
	I1127 11:53:06.588246  175460 logs.go:123] Gathering logs for kube-apiserver [9d01d9bc1669] ...
	I1127 11:53:06.588260  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d01d9bc1669"
	I1127 11:53:06.641373  175460 logs.go:123] Gathering logs for kube-proxy [cbb0ebc02945] ...
	I1127 11:53:06.641404  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbb0ebc02945"
	I1127 11:53:06.665106  175460 logs.go:123] Gathering logs for kube-controller-manager [e4b00b929255] ...
	I1127 11:53:06.665138  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4b00b929255"
	I1127 11:53:06.716362  175460 logs.go:123] Gathering logs for kubernetes-dashboard [37775197b0a6] ...
	I1127 11:53:06.716399  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37775197b0a6"
	I1127 11:53:06.740840  175460 logs.go:123] Gathering logs for storage-provisioner [0574a2a7879e] ...
	I1127 11:53:06.740880  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0574a2a7879e"
	I1127 11:53:06.766123  175460 logs.go:123] Gathering logs for Docker ...
	I1127 11:53:06.766152  175460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1127 11:53:06.839732  175460 logs.go:123] Gathering logs for dmesg ...
	I1127 11:53:06.839772  175460 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1127 11:53:06.856334  175460 logs.go:123] Gathering logs for etcd [95fbed7a0c30] ...
	I1127 11:53:06.856367  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95fbed7a0c30"
	I1127 11:53:06.910722  175460 logs.go:123] Gathering logs for kube-scheduler [ec97643bbdaa] ...
	I1127 11:53:06.910763  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec97643bbdaa"
	I1127 11:53:06.936155  175460 logs.go:123] Gathering logs for storage-provisioner [b2e193a8c59b] ...
	I1127 11:53:06.936187  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e193a8c59b"
	I1127 11:53:06.963236  175460 logs.go:123] Gathering logs for kubelet ...
	I1127 11:53:06.963260  175460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1127 11:53:07.050198  175460 logs.go:123] Gathering logs for kube-scheduler [c9b5e06f7ed2] ...
	I1127 11:53:07.050245  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b5e06f7ed2"
	I1127 11:53:07.086457  175460 logs.go:123] Gathering logs for kube-proxy [9deab8695a9f] ...
	I1127 11:53:07.086490  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9deab8695a9f"
	I1127 11:53:07.108959  175460 logs.go:123] Gathering logs for describe nodes ...
	I1127 11:53:07.108990  175460 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1127 11:53:07.230905  175460 logs.go:123] Gathering logs for kube-apiserver [eed1658a83d4] ...
	I1127 11:53:07.230940  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed1658a83d4"
	I1127 11:53:07.272630  175460 logs.go:123] Gathering logs for etcd [82ad15ab39ba] ...
	I1127 11:53:07.272661  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ad15ab39ba"
	I1127 11:53:07.308886  175460 logs.go:123] Gathering logs for coredns [7b6639ff5bf1] ...
	I1127 11:53:07.308922  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6639ff5bf1"
	I1127 11:53:07.334486  175460 logs.go:123] Gathering logs for coredns [5ec6011af158] ...
	I1127 11:53:07.334519  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec6011af158"
	I1127 11:53:07.361279  175460 logs.go:123] Gathering logs for kube-controller-manager [4134c5824af5] ...
	I1127 11:53:07.361305  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4134c5824af5"
	I1127 11:53:07.405694  175460 logs.go:123] Gathering logs for container status ...
	I1127 11:53:07.405738  175460 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1127 11:53:09.998422  175460 api_server.go:253] Checking apiserver healthz at https://192.168.50.175:8444/healthz ...
	I1127 11:53:10.003599  175460 api_server.go:279] https://192.168.50.175:8444/healthz returned 200:
	ok
	I1127 11:53:10.004936  175460 api_server.go:141] control plane version: v1.28.4
	I1127 11:53:10.004958  175460 api_server.go:131] duration metric: took 3.624313639s to wait for apiserver health ...
	I1127 11:53:10.004968  175460 system_pods.go:43] waiting for kube-system pods to appear ...
	I1127 11:53:10.005040  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1127 11:53:10.029000  175460 logs.go:284] 2 containers: [eed1658a83d4 9d01d9bc1669]
	I1127 11:53:10.029084  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1127 11:53:10.061481  175460 logs.go:284] 2 containers: [95fbed7a0c30 82ad15ab39ba]
	I1127 11:53:10.061555  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1127 11:53:10.080645  175460 logs.go:284] 2 containers: [7b6639ff5bf1 5ec6011af158]
	I1127 11:53:10.080747  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1127 11:53:10.099968  175460 logs.go:284] 2 containers: [ec97643bbdaa c9b5e06f7ed2]
	I1127 11:53:10.100042  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1127 11:53:10.118349  175460 logs.go:284] 2 containers: [cbb0ebc02945 9deab8695a9f]
	I1127 11:53:10.118436  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1127 11:53:10.139129  175460 logs.go:284] 2 containers: [e4b00b929255 4134c5824af5]
	I1127 11:53:10.139223  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1127 11:53:10.161323  175460 logs.go:284] 0 containers: []
	W1127 11:53:10.161348  175460 logs.go:286] No container was found matching "kindnet"
	I1127 11:53:10.161394  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1127 11:53:10.181355  175460 logs.go:284] 1 containers: [37775197b0a6]
	I1127 11:53:10.181454  175460 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1127 11:53:10.205443  175460 logs.go:284] 2 containers: [b2e193a8c59b 0574a2a7879e]
	I1127 11:53:10.205493  175460 logs.go:123] Gathering logs for kubernetes-dashboard [37775197b0a6] ...
	I1127 11:53:10.205509  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 37775197b0a6"
	I1127 11:53:10.230463  175460 logs.go:123] Gathering logs for kube-proxy [9deab8695a9f] ...
	I1127 11:53:10.230499  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9deab8695a9f"
	I1127 11:53:10.259576  175460 logs.go:123] Gathering logs for kube-controller-manager [4134c5824af5] ...
	I1127 11:53:10.259615  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4134c5824af5"
	I1127 11:53:10.299611  175460 logs.go:123] Gathering logs for describe nodes ...
	I1127 11:53:10.299646  175460 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1127 11:53:10.453242  175460 logs.go:123] Gathering logs for etcd [95fbed7a0c30] ...
	I1127 11:53:10.453286  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 95fbed7a0c30"
	I1127 11:53:10.525064  175460 logs.go:123] Gathering logs for kube-scheduler [c9b5e06f7ed2] ...
	I1127 11:53:10.525107  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b5e06f7ed2"
	I1127 11:53:10.557907  175460 logs.go:123] Gathering logs for storage-provisioner [b2e193a8c59b] ...
	I1127 11:53:10.557937  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2e193a8c59b"
	I1127 11:53:10.581309  175460 logs.go:123] Gathering logs for storage-provisioner [0574a2a7879e] ...
	I1127 11:53:10.581339  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0574a2a7879e"
	I1127 11:53:10.603462  175460 logs.go:123] Gathering logs for Docker ...
	I1127 11:53:10.603493  175460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1127 11:53:10.662004  175460 logs.go:123] Gathering logs for kubelet ...
	I1127 11:53:10.662039  175460 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1127 11:53:10.737340  175460 logs.go:123] Gathering logs for dmesg ...
	I1127 11:53:10.737386  175460 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1127 11:53:10.752449  175460 logs.go:123] Gathering logs for container status ...
	I1127 11:53:10.752477  175460 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1127 11:53:10.827407  175460 logs.go:123] Gathering logs for kube-proxy [cbb0ebc02945] ...
	I1127 11:53:10.827446  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cbb0ebc02945"
	I1127 11:53:10.854409  175460 logs.go:123] Gathering logs for kube-controller-manager [e4b00b929255] ...
	I1127 11:53:10.854444  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4b00b929255"
	I1127 11:53:10.908491  175460 logs.go:123] Gathering logs for kube-apiserver [9d01d9bc1669] ...
	I1127 11:53:10.908539  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d01d9bc1669"
	I1127 11:53:10.983293  175460 logs.go:123] Gathering logs for etcd [82ad15ab39ba] ...
	I1127 11:53:10.983326  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82ad15ab39ba"
	I1127 11:53:11.019018  175460 logs.go:123] Gathering logs for coredns [5ec6011af158] ...
	I1127 11:53:11.019050  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ec6011af158"
	I1127 11:53:11.042029  175460 logs.go:123] Gathering logs for kube-scheduler [ec97643bbdaa] ...
	I1127 11:53:11.042057  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ec97643bbdaa"
	I1127 11:53:11.063867  175460 logs.go:123] Gathering logs for kube-apiserver [eed1658a83d4] ...
	I1127 11:53:11.063896  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eed1658a83d4"
	I1127 11:53:11.098599  175460 logs.go:123] Gathering logs for coredns [7b6639ff5bf1] ...
	I1127 11:53:11.098630  175460 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7b6639ff5bf1"
	I1127 11:53:13.629490  175460 system_pods.go:59] 8 kube-system pods found
	I1127 11:53:13.629519  175460 system_pods.go:61] "coredns-5dd5756b68-vp8mt" [f3ddff38-6258-41d5-ac9c-98c6775fca67] Running
	I1127 11:53:13.629523  175460 system_pods.go:61] "etcd-default-k8s-diff-port-028212" [5e412f9b-9676-4088-9260-e3ad4e4b8141] Running
	I1127 11:53:13.629528  175460 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-028212" [4e42d6a0-d5d3-4a36-b2d4-611652414982] Running
	I1127 11:53:13.629532  175460 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-028212" [cbdf1880-9663-4765-aa1f-68a09a175566] Running
	I1127 11:53:13.629535  175460 system_pods.go:61] "kube-proxy-l845w" [8394f06d-be41-4417-8b3e-be40a33a3792] Running
	I1127 11:53:13.629539  175460 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-028212" [6aba3375-0932-42ae-ac61-3b00d4a0607b] Running
	I1127 11:53:13.629545  175460 system_pods.go:61] "metrics-server-57f55c9bc5-mbpl7" [63a4b15c-24df-49cf-81ff-7a310da18bf3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:53:13.629551  175460 system_pods.go:61] "storage-provisioner" [834b9496-2ae0-45a1-986e-d1813b9b0f50] Running
	I1127 11:53:13.629562  175460 system_pods.go:74] duration metric: took 3.624588316s to wait for pod list to return data ...
	I1127 11:53:13.629570  175460 default_sa.go:34] waiting for default service account to be created ...
	I1127 11:53:13.633271  175460 default_sa.go:45] found service account: "default"
	I1127 11:53:13.633300  175460 default_sa.go:55] duration metric: took 3.722592ms for default service account to be created ...
	I1127 11:53:13.633309  175460 system_pods.go:116] waiting for k8s-apps to be running ...
	I1127 11:53:13.639924  175460 system_pods.go:86] 8 kube-system pods found
	I1127 11:53:13.639950  175460 system_pods.go:89] "coredns-5dd5756b68-vp8mt" [f3ddff38-6258-41d5-ac9c-98c6775fca67] Running
	I1127 11:53:13.639960  175460 system_pods.go:89] "etcd-default-k8s-diff-port-028212" [5e412f9b-9676-4088-9260-e3ad4e4b8141] Running
	I1127 11:53:13.639968  175460 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-028212" [4e42d6a0-d5d3-4a36-b2d4-611652414982] Running
	I1127 11:53:13.639977  175460 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-028212" [cbdf1880-9663-4765-aa1f-68a09a175566] Running
	I1127 11:53:13.639983  175460 system_pods.go:89] "kube-proxy-l845w" [8394f06d-be41-4417-8b3e-be40a33a3792] Running
	I1127 11:53:13.639989  175460 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-028212" [6aba3375-0932-42ae-ac61-3b00d4a0607b] Running
	I1127 11:53:13.640000  175460 system_pods.go:89] "metrics-server-57f55c9bc5-mbpl7" [63a4b15c-24df-49cf-81ff-7a310da18bf3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:53:13.640012  175460 system_pods.go:89] "storage-provisioner" [834b9496-2ae0-45a1-986e-d1813b9b0f50] Running
	I1127 11:53:13.640022  175460 system_pods.go:126] duration metric: took 6.706838ms to wait for k8s-apps to be running ...
	I1127 11:53:13.640035  175460 system_svc.go:44] waiting for kubelet service to be running ....
	I1127 11:53:13.640093  175460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:53:13.655785  175460 system_svc.go:56] duration metric: took 15.741489ms WaitForService to wait for kubelet.
	I1127 11:53:13.655813  175460 kubeadm.go:581] duration metric: took 4m22.706462509s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1127 11:53:13.655840  175460 node_conditions.go:102] verifying NodePressure condition ...
	I1127 11:53:13.660068  175460 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1127 11:53:13.660096  175460 node_conditions.go:123] node cpu capacity is 2
	I1127 11:53:13.660109  175460 node_conditions.go:105] duration metric: took 4.262435ms to run NodePressure ...
	I1127 11:53:13.660126  175460 start.go:228] waiting for startup goroutines ...
	I1127 11:53:13.660139  175460 start.go:233] waiting for cluster config update ...
	I1127 11:53:13.660155  175460 start.go:242] writing updated cluster config ...
	I1127 11:53:13.660486  175460 ssh_runner.go:195] Run: rm -f paused
	I1127 11:53:13.708673  175460 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1127 11:53:13.711401  175460 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-028212" cluster and "default" namespace by default
	I1127 11:53:14.400577  175050 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.004089 seconds
	I1127 11:53:14.400766  175050 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1127 11:53:14.418306  175050 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1127 11:53:14.955993  175050 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1127 11:53:14.956165  175050 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-337707 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1127 11:53:15.463940  175050 kubeadm.go:322] [bootstrap-token] Using token: 0teqvq.4s116zw4t6t4pggd
	I1127 11:53:15.465647  175050 out.go:204]   - Configuring RBAC rules ...
	I1127 11:53:15.465796  175050 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1127 11:53:15.477463  175050 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1127 11:53:15.482801  175050 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1127 11:53:15.485362  175050 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1127 11:53:15.488499  175050 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1127 11:53:15.557244  175050 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1127 11:53:15.885926  175050 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1127 11:53:15.885953  175050 kubeadm.go:322] 
	I1127 11:53:15.886040  175050 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1127 11:53:15.886054  175050 kubeadm.go:322] 
	I1127 11:53:15.886160  175050 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1127 11:53:15.886175  175050 kubeadm.go:322] 
	I1127 11:53:15.886210  175050 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1127 11:53:15.886313  175050 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1127 11:53:15.886386  175050 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1127 11:53:15.886396  175050 kubeadm.go:322] 
	I1127 11:53:15.886475  175050 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1127 11:53:15.886577  175050 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1127 11:53:15.886666  175050 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1127 11:53:15.886683  175050 kubeadm.go:322] 
	I1127 11:53:15.886793  175050 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1127 11:53:15.886909  175050 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1127 11:53:15.886920  175050 kubeadm.go:322] 
	I1127 11:53:15.887045  175050 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0teqvq.4s116zw4t6t4pggd \
	I1127 11:53:15.887210  175050 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:fd8fee4179dfb986d324014921cfe97120e18a553951f83c01934cca0b94aeef \
	I1127 11:53:15.887244  175050 kubeadm.go:322]     --control-plane
	I1127 11:53:15.887253  175050 kubeadm.go:322] 
	I1127 11:53:15.887350  175050 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1127 11:53:15.887359  175050 kubeadm.go:322] 
	I1127 11:53:15.887521  175050 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0teqvq.4s116zw4t6t4pggd \
	I1127 11:53:15.887645  175050 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:fd8fee4179dfb986d324014921cfe97120e18a553951f83c01934cca0b94aeef 
	I1127 11:53:15.888902  175050 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1127 11:53:15.889082  175050 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 18.09
	I1127 11:53:15.889206  175050 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1127 11:53:15.889240  175050 cni.go:84] Creating CNI manager for ""
	I1127 11:53:15.889257  175050 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1127 11:53:15.889278  175050 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1127 11:53:15.889389  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:15.889389  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=81390b5609e7feb2151fde4633273d04eb05a21f minikube.k8s.io/name=old-k8s-version-337707 minikube.k8s.io/updated_at=2023_11_27T11_53_15_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:15.926947  175050 ops.go:34] apiserver oom_adj: -16
	I1127 11:53:16.154511  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:16.262318  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:16.876193  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:17.375568  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:17.876367  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:18.376387  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:18.875954  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:19.375568  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:19.875868  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:20.375701  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:20.875727  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:21.376243  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:21.875542  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:22.376111  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:22.876122  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:23.375884  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:23.876386  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:24.376202  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:24.875952  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:25.376315  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:25.875947  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:26.376345  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:26.875945  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:27.376395  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:27.876384  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:28.375927  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:28.875489  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:29.376388  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:29.875563  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:30.376439  175050 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1127 11:53:30.622516  175050 kubeadm.go:1081] duration metric: took 14.733184848s to wait for elevateKubeSystemPrivileges.
	I1127 11:53:30.622546  175050 kubeadm.go:406] StartCluster complete in 5m40.121565922s
	I1127 11:53:30.622571  175050 settings.go:142] acquiring lock: {Name:mk0bde143fb6a5b453a36dab2e4269e4e489beea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:53:30.622669  175050 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17644-122411/kubeconfig
	I1127 11:53:30.623293  175050 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17644-122411/kubeconfig: {Name:mk165b6db416838b8311934f21a494f4c2865dd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1127 11:53:30.623525  175050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1127 11:53:30.623714  175050 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1127 11:53:30.623798  175050 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-337707"
	I1127 11:53:30.623811  175050 config.go:182] Loaded profile config "old-k8s-version-337707": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1127 11:53:30.623824  175050 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-337707"
	I1127 11:53:30.623825  175050 addons.go:69] Setting dashboard=true in profile "old-k8s-version-337707"
	W1127 11:53:30.623835  175050 addons.go:240] addon storage-provisioner should already be in state true
	I1127 11:53:30.623845  175050 addons.go:231] Setting addon dashboard=true in "old-k8s-version-337707"
	W1127 11:53:30.623853  175050 addons.go:240] addon dashboard should already be in state true
	I1127 11:53:30.623817  175050 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-337707"
	I1127 11:53:30.623890  175050 host.go:66] Checking if "old-k8s-version-337707" exists ...
	I1127 11:53:30.623896  175050 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-337707"
	I1127 11:53:30.623882  175050 cache.go:107] acquiring lock: {Name:mk395a86368ef8d463afdafe89a54fa575ce50bd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 11:53:30.623911  175050 host.go:66] Checking if "old-k8s-version-337707" exists ...
	I1127 11:53:30.623854  175050 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-337707"
	I1127 11:53:30.623973  175050 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-337707"
	W1127 11:53:30.623984  175050 addons.go:240] addon metrics-server should already be in state true
	I1127 11:53:30.624023  175050 host.go:66] Checking if "old-k8s-version-337707" exists ...
	I1127 11:53:30.623941  175050 cache.go:115] /home/jenkins/minikube-integration/17644-122411/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I1127 11:53:30.624190  175050 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17644-122411/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 310.917µs
	I1127 11:53:30.624203  175050 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17644-122411/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I1127 11:53:30.624212  175050 cache.go:87] Successfully saved all images to host disk.
	I1127 11:53:30.624315  175050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:53:30.624320  175050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:53:30.624313  175050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:53:30.624340  175050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:53:30.624358  175050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:53:30.624398  175050 config.go:182] Loaded profile config "old-k8s-version-337707": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1127 11:53:30.624403  175050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:53:30.624426  175050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:53:30.624429  175050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:53:30.624716  175050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:53:30.624738  175050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:53:30.641076  175050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38055
	I1127 11:53:30.641404  175050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36139
	I1127 11:53:30.641598  175050 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:53:30.641855  175050 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:53:30.642172  175050 main.go:141] libmachine: Using API Version  1
	I1127 11:53:30.642196  175050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:53:30.642328  175050 main.go:141] libmachine: Using API Version  1
	I1127 11:53:30.642346  175050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:53:30.642405  175050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43783
	I1127 11:53:30.642564  175050 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:53:30.642763  175050 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:53:30.642763  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetState
	I1127 11:53:30.642787  175050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32981
	I1127 11:53:30.642968  175050 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:53:30.643124  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetState
	I1127 11:53:30.643442  175050 main.go:141] libmachine: Using API Version  1
	I1127 11:53:30.643473  175050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:53:30.643527  175050 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:53:30.643892  175050 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:53:30.644041  175050 main.go:141] libmachine: Using API Version  1
	I1127 11:53:30.644059  175050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:53:30.644522  175050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:53:30.644567  175050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:53:30.644764  175050 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:53:30.645294  175050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:53:30.645324  175050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:53:30.645516  175050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:53:30.645559  175050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:53:30.646583  175050 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-337707"
	W1127 11:53:30.646607  175050 addons.go:240] addon default-storageclass should already be in state true
	I1127 11:53:30.646633  175050 host.go:66] Checking if "old-k8s-version-337707" exists ...
	I1127 11:53:30.647005  175050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:53:30.647033  175050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:53:30.649813  175050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41593
	I1127 11:53:30.650175  175050 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:53:30.650704  175050 main.go:141] libmachine: Using API Version  1
	I1127 11:53:30.650724  175050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:53:30.651174  175050 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:53:30.651644  175050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:53:30.651681  175050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:53:30.664177  175050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46809
	I1127 11:53:30.664513  175050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35687
	I1127 11:53:30.664681  175050 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:53:30.664963  175050 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:53:30.665120  175050 main.go:141] libmachine: Using API Version  1
	I1127 11:53:30.665133  175050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:53:30.665408  175050 main.go:141] libmachine: Using API Version  1
	I1127 11:53:30.665432  175050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:53:30.665492  175050 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:53:30.665671  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetState
	I1127 11:53:30.665868  175050 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:53:30.666084  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetState
	I1127 11:53:30.666428  175050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35931
	I1127 11:53:30.667014  175050 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:53:30.668180  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .DriverName
	I1127 11:53:30.668270  175050 main.go:141] libmachine: Using API Version  1
	I1127 11:53:30.668277  175050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:53:30.670626  175050 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1127 11:53:30.668659  175050 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:53:30.668964  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .DriverName
	I1127 11:53:30.672265  175050 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 11:53:30.672288  175050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1127 11:53:30.672308  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetSSHHostname
	I1127 11:53:30.673849  175050 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1127 11:53:30.672392  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .DriverName
	I1127 11:53:30.675191  175050 main.go:141] libmachine: (old-k8s-version-337707) DBG | domain old-k8s-version-337707 has defined MAC address 52:54:00:3b:9d:ae in network mk-old-k8s-version-337707
	I1127 11:53:30.676421  175050 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1127 11:53:30.677737  175050 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1127 11:53:30.677752  175050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1127 11:53:30.677771  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetSSHHostname
	I1127 11:53:30.675503  175050 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1127 11:53:30.677840  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetSSHHostname
	I1127 11:53:30.675668  175050 main.go:141] libmachine: (old-k8s-version-337707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:9d:ae", ip: ""} in network mk-old-k8s-version-337707: {Iface:virbr3 ExpiryTime:2023-11-27 12:47:27 +0000 UTC Type:0 Mac:52:54:00:3b:9d:ae Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:old-k8s-version-337707 Clientid:01:52:54:00:3b:9d:ae}
	I1127 11:53:30.677902  175050 main.go:141] libmachine: (old-k8s-version-337707) DBG | domain old-k8s-version-337707 has defined IP address 192.168.61.126 and MAC address 52:54:00:3b:9d:ae in network mk-old-k8s-version-337707
	I1127 11:53:30.675914  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetSSHPort
	I1127 11:53:30.678066  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetSSHKeyPath
	I1127 11:53:30.678260  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetSSHUsername
	I1127 11:53:30.678420  175050 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/old-k8s-version-337707/id_rsa Username:docker}
	I1127 11:53:30.681272  175050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34763
	I1127 11:53:30.681315  175050 main.go:141] libmachine: (old-k8s-version-337707) DBG | domain old-k8s-version-337707 has defined MAC address 52:54:00:3b:9d:ae in network mk-old-k8s-version-337707
	I1127 11:53:30.681689  175050 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:53:30.681743  175050 main.go:141] libmachine: (old-k8s-version-337707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:9d:ae", ip: ""} in network mk-old-k8s-version-337707: {Iface:virbr3 ExpiryTime:2023-11-27 12:47:27 +0000 UTC Type:0 Mac:52:54:00:3b:9d:ae Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:old-k8s-version-337707 Clientid:01:52:54:00:3b:9d:ae}
	I1127 11:53:30.681769  175050 main.go:141] libmachine: (old-k8s-version-337707) DBG | domain old-k8s-version-337707 has defined IP address 192.168.61.126 and MAC address 52:54:00:3b:9d:ae in network mk-old-k8s-version-337707
	I1127 11:53:30.681986  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetSSHPort
	I1127 11:53:30.682228  175050 main.go:141] libmachine: Using API Version  1
	I1127 11:53:30.682245  175050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:53:30.682270  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetSSHKeyPath
	I1127 11:53:30.682427  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetSSHUsername
	I1127 11:53:30.682563  175050 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/old-k8s-version-337707/id_rsa Username:docker}
	I1127 11:53:30.682678  175050 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:53:30.683141  175050 main.go:141] libmachine: (old-k8s-version-337707) DBG | domain old-k8s-version-337707 has defined MAC address 52:54:00:3b:9d:ae in network mk-old-k8s-version-337707
	I1127 11:53:30.683578  175050 main.go:141] libmachine: (old-k8s-version-337707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:9d:ae", ip: ""} in network mk-old-k8s-version-337707: {Iface:virbr3 ExpiryTime:2023-11-27 12:47:27 +0000 UTC Type:0 Mac:52:54:00:3b:9d:ae Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:old-k8s-version-337707 Clientid:01:52:54:00:3b:9d:ae}
	I1127 11:53:30.683594  175050 main.go:141] libmachine: (old-k8s-version-337707) DBG | domain old-k8s-version-337707 has defined IP address 192.168.61.126 and MAC address 52:54:00:3b:9d:ae in network mk-old-k8s-version-337707
	I1127 11:53:30.683734  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetSSHPort
	I1127 11:53:30.683930  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetSSHKeyPath
	I1127 11:53:30.684067  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetSSHUsername
	I1127 11:53:30.684200  175050 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/old-k8s-version-337707/id_rsa Username:docker}
	I1127 11:53:30.687482  175050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:53:30.687518  175050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:53:30.701727  175050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40305
	I1127 11:53:30.702131  175050 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:53:30.702613  175050 main.go:141] libmachine: Using API Version  1
	I1127 11:53:30.702630  175050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:53:30.702944  175050 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:53:30.703176  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetState
	I1127 11:53:30.704621  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .DriverName
	I1127 11:53:30.704910  175050 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1127 11:53:30.704931  175050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1127 11:53:30.704950  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetSSHHostname
	I1127 11:53:30.707466  175050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35149
	I1127 11:53:30.707700  175050 main.go:141] libmachine: (old-k8s-version-337707) DBG | domain old-k8s-version-337707 has defined MAC address 52:54:00:3b:9d:ae in network mk-old-k8s-version-337707
	I1127 11:53:30.707925  175050 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:53:30.708166  175050 main.go:141] libmachine: (old-k8s-version-337707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:9d:ae", ip: ""} in network mk-old-k8s-version-337707: {Iface:virbr3 ExpiryTime:2023-11-27 12:47:27 +0000 UTC Type:0 Mac:52:54:00:3b:9d:ae Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:old-k8s-version-337707 Clientid:01:52:54:00:3b:9d:ae}
	I1127 11:53:30.708193  175050 main.go:141] libmachine: (old-k8s-version-337707) DBG | domain old-k8s-version-337707 has defined IP address 192.168.61.126 and MAC address 52:54:00:3b:9d:ae in network mk-old-k8s-version-337707
	I1127 11:53:30.708369  175050 main.go:141] libmachine: Using API Version  1
	I1127 11:53:30.708391  175050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:53:30.708461  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetSSHPort
	I1127 11:53:30.708624  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetSSHKeyPath
	I1127 11:53:30.708691  175050 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:53:30.708766  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetSSHUsername
	I1127 11:53:30.708865  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetState
	I1127 11:53:30.708922  175050 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/old-k8s-version-337707/id_rsa Username:docker}
	I1127 11:53:30.710323  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .DriverName
	I1127 11:53:30.712378  175050 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1127 11:53:30.713829  175050 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1127 11:53:30.713856  175050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1127 11:53:30.713902  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetSSHHostname
	I1127 11:53:30.716470  175050 main.go:141] libmachine: (old-k8s-version-337707) DBG | domain old-k8s-version-337707 has defined MAC address 52:54:00:3b:9d:ae in network mk-old-k8s-version-337707
	I1127 11:53:30.716761  175050 main.go:141] libmachine: (old-k8s-version-337707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:9d:ae", ip: ""} in network mk-old-k8s-version-337707: {Iface:virbr3 ExpiryTime:2023-11-27 12:47:27 +0000 UTC Type:0 Mac:52:54:00:3b:9d:ae Iaid: IPaddr:192.168.61.126 Prefix:24 Hostname:old-k8s-version-337707 Clientid:01:52:54:00:3b:9d:ae}
	I1127 11:53:30.716789  175050 main.go:141] libmachine: (old-k8s-version-337707) DBG | domain old-k8s-version-337707 has defined IP address 192.168.61.126 and MAC address 52:54:00:3b:9d:ae in network mk-old-k8s-version-337707
	I1127 11:53:30.716906  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetSSHPort
	I1127 11:53:30.717059  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetSSHKeyPath
	I1127 11:53:30.717189  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .GetSSHUsername
	I1127 11:53:30.717304  175050 sshutil.go:53] new ssh client: &{IP:192.168.61.126 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/old-k8s-version-337707/id_rsa Username:docker}
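
The "scp memory --> ..." lines above stream each addon manifest from memory straight onto the VM over the SSH clients being wired up here. A minimal sketch of that pattern, assuming golang.org/x/crypto/ssh and using `sudo tee` as a stand-in for minikube's actual scp implementation (the key path and target file are taken from this run's log):

	package main

	import (
		"bytes"
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// pushFile writes in-memory bytes to a remote path over SSH, the pattern the
	// "scp memory --> ..." lines describe. sudo tee stands in for the real scp.
	func pushFile(addr string, cfg *ssh.ClientConfig, data []byte, remotePath string) error {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return err
		}
		defer session.Close()
		session.Stdin = bytes.NewReader(data)
		return session.Run(fmt.Sprintf("sudo tee %s > /dev/null", remotePath))
	}

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/17644-122411/.minikube/machines/old-k8s-version-337707/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		data := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: kubernetes-dashboard\n")
		fmt.Println(pushFile("192.168.61.126:22", cfg, data, "/etc/kubernetes/addons/dashboard-ns.yaml"))
	}
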
	I1127 11:53:30.790071  175050 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-337707" context rescaled to 1 replicas
	I1127 11:53:30.790118  175050 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.126 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1127 11:53:30.791724  175050 out.go:177] * Verifying Kubernetes components...
	I1127 11:53:30.793199  175050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:53:30.998849  175050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1127 11:53:31.031738  175050 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1127 11:53:31.031759  175050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1127 11:53:31.078498  175050 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-337707" to be "Ready" ...
	I1127 11:53:31.078601  175050 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1127 11:53:31.081604  175050 node_ready.go:49] node "old-k8s-version-337707" has status "Ready":"True"
	I1127 11:53:31.081626  175050 node_ready.go:38] duration metric: took 3.101025ms waiting for node "old-k8s-version-337707" to be "Ready" ...
	I1127 11:53:31.081635  175050 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
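
node_ready above resolves in about 3ms because the restarted node is already Ready; the wait itself is just a poll of the node's conditions through the API server. A rough client-go equivalent, assuming the kubeconfig path used inside this VM (a sketch, not minikube's internal helper):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls until the named node reports Ready=True or the timeout expires.
	func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second) // coarse poll interval; minikube's may differ
		}
		return fmt.Errorf("node %s not Ready within %s", name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		fmt.Println(waitNodeReady(cs, "old-k8s-version-337707", 6*time.Minute))
	}
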
	I1127 11:53:31.085226  175050 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-745v8" in "kube-system" namespace to be "Ready" ...
	I1127 11:53:31.123240  175050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1127 11:53:31.164364  175050 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1127 11:53:31.164390  175050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1127 11:53:31.213387  175050 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1127 11:53:31.213413  175050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1127 11:53:31.241745  175050 docker.go:671] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	k8s.gcr.io/pause:3.1
	registry.k8s.io/pause:3.1
	
	-- /stdout --
	I1127 11:53:31.241773  175050 cache_images.go:84] Images are preloaded, skipping loading
	I1127 11:53:31.241784  175050 cache_images.go:262] succeeded pushing to: old-k8s-version-337707
	I1127 11:53:31.241787  175050 cache_images.go:263] failed pushing to: 
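
The cache_images decision above ("Images are preloaded, skipping loading") comes from comparing the `docker images --format {{.Repository}}:{{.Tag}}` listing against the image set the preload tarball should have provided. A minimal sketch of that check (the wanted list here is abbreviated from the stdout block above):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// preloaded reports whether every wanted image already appears in the
	// local docker image listing, the check cache_images performs before
	// deciding it can skip loading.
	func preloaded(wanted []string) (bool, error) {
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			return false, err
		}
		have := map[string]bool{}
		for _, img := range strings.Fields(string(out)) {
			have[img] = true
		}
		for _, img := range wanted {
			if !have[img] {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		ok, err := preloaded([]string{"k8s.gcr.io/kube-apiserver:v1.16.0", "k8s.gcr.io/etcd:3.3.15-0"})
		fmt.Println(ok, err)
	}
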
	I1127 11:53:31.241815  175050 main.go:141] libmachine: Making call to close driver server
	I1127 11:53:31.241826  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .Close
	I1127 11:53:31.242144  175050 main.go:141] libmachine: (old-k8s-version-337707) DBG | Closing plugin on server side
	I1127 11:53:31.242198  175050 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:53:31.242207  175050 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:53:31.242222  175050 main.go:141] libmachine: Making call to close driver server
	I1127 11:53:31.242231  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .Close
	I1127 11:53:31.242499  175050 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:53:31.242520  175050 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:53:31.242526  175050 main.go:141] libmachine: (old-k8s-version-337707) DBG | Closing plugin on server side
	I1127 11:53:31.330606  175050 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1127 11:53:31.330635  175050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1127 11:53:31.341809  175050 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1127 11:53:31.341834  175050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1127 11:53:31.449132  175050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1127 11:53:31.524118  175050 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1127 11:53:31.524143  175050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1127 11:53:31.586489  175050 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1127 11:53:31.586514  175050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1127 11:53:31.712553  175050 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1127 11:53:31.712583  175050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1127 11:53:31.740057  175050 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1127 11:53:31.740082  175050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1127 11:53:31.905723  175050 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1127 11:53:31.905746  175050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1127 11:53:31.933735  175050 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1127 11:53:31.933759  175050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1127 11:53:31.996178  175050 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1127 11:53:31.996212  175050 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1127 11:53:32.048802  175050 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1127 11:53:32.542326  175050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.543436606s)
	I1127 11:53:32.542354  175050 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.463724603s)
	I1127 11:53:32.542376  175050 main.go:141] libmachine: Making call to close driver server
	I1127 11:53:32.542387  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .Close
	I1127 11:53:32.542373  175050 start.go:926] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
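
The bash pipeline that just completed rewrites the CoreDNS ConfigMap with sed so that host.minikube.internal resolves to the host's gateway IP (192.168.61.1) from inside the cluster. The same transformation in Go, operating on a Corefile string — a simplified stand-in for the sed expressions shown above:

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord inserts a hosts{} block ahead of the forward directive so
	// host.minikube.internal resolves to the host IP from inside the cluster.
	func injectHostRecord(corefile, hostIP string) string {
		hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		var out strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			if strings.HasPrefix(strings.TrimLeft(line, " "), "forward . /etc/resolv.conf") {
				out.WriteString(hosts)
			}
			out.WriteString(line)
		}
		return out.String()
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.61.1"))
	}
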
	I1127 11:53:32.542749  175050 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:53:32.542777  175050 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:53:32.542789  175050 main.go:141] libmachine: Making call to close driver server
	I1127 11:53:32.542799  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .Close
	I1127 11:53:32.543065  175050 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:53:32.543095  175050 main.go:141] libmachine: (old-k8s-version-337707) DBG | Closing plugin on server side
	I1127 11:53:32.543099  175050 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:53:32.553182  175050 main.go:141] libmachine: Making call to close driver server
	I1127 11:53:32.553199  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .Close
	I1127 11:53:32.553489  175050 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:53:32.553512  175050 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:53:32.625005  175050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.501725962s)
	I1127 11:53:32.625055  175050 main.go:141] libmachine: Making call to close driver server
	I1127 11:53:32.625068  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .Close
	I1127 11:53:32.625339  175050 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:53:32.625362  175050 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:53:32.625374  175050 main.go:141] libmachine: Making call to close driver server
	I1127 11:53:32.625384  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .Close
	I1127 11:53:32.625644  175050 main.go:141] libmachine: (old-k8s-version-337707) DBG | Closing plugin on server side
	I1127 11:53:32.625680  175050 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:53:32.625692  175050 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:53:32.914728  175050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.465554657s)
	I1127 11:53:32.914777  175050 main.go:141] libmachine: Making call to close driver server
	I1127 11:53:32.914787  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .Close
	I1127 11:53:32.915109  175050 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:53:32.915129  175050 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:53:32.915128  175050 main.go:141] libmachine: (old-k8s-version-337707) DBG | Closing plugin on server side
	I1127 11:53:32.915145  175050 main.go:141] libmachine: Making call to close driver server
	I1127 11:53:32.915156  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .Close
	I1127 11:53:32.915459  175050 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:53:32.915485  175050 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:53:32.915496  175050 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-337707"
	I1127 11:53:32.915537  175050 main.go:141] libmachine: (old-k8s-version-337707) DBG | Closing plugin on server side
	I1127 11:53:33.177489  175050 pod_ready.go:102] pod "coredns-5644d7b6d9-745v8" in "kube-system" namespace has status "Ready":"False"
	I1127 11:53:33.348277  175050 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.299429406s)
	I1127 11:53:33.348338  175050 main.go:141] libmachine: Making call to close driver server
	I1127 11:53:33.348353  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .Close
	I1127 11:53:33.348725  175050 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:53:33.348743  175050 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:53:33.348753  175050 main.go:141] libmachine: Making call to close driver server
	I1127 11:53:33.348728  175050 main.go:141] libmachine: (old-k8s-version-337707) DBG | Closing plugin on server side
	I1127 11:53:33.348765  175050 main.go:141] libmachine: (old-k8s-version-337707) Calling .Close
	I1127 11:53:33.349017  175050 main.go:141] libmachine: Successfully made call to close driver server
	I1127 11:53:33.349061  175050 main.go:141] libmachine: Making call to close connection to plugin binary
	I1127 11:53:33.350624  175050 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-337707 addons enable metrics-server	
	
	
	I1127 11:53:33.349043  175050 main.go:141] libmachine: (old-k8s-version-337707) DBG | Closing plugin on server side
	I1127 11:53:33.353498  175050 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I1127 11:53:33.354910  175050 addons.go:502] enable addons completed in 2.731206858s: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I1127 11:53:35.601357  175050 pod_ready.go:102] pod "coredns-5644d7b6d9-745v8" in "kube-system" namespace has status "Ready":"False"
	I1127 11:53:37.613812  175050 pod_ready.go:102] pod "coredns-5644d7b6d9-745v8" in "kube-system" namespace has status "Ready":"False"
	I1127 11:53:38.101475  175050 pod_ready.go:92] pod "coredns-5644d7b6d9-745v8" in "kube-system" namespace has status "Ready":"True"
	I1127 11:53:38.101501  175050 pod_ready.go:81] duration metric: took 7.016255481s waiting for pod "coredns-5644d7b6d9-745v8" in "kube-system" namespace to be "Ready" ...
	I1127 11:53:38.101510  175050 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ddg5h" in "kube-system" namespace to be "Ready" ...
	I1127 11:53:38.106651  175050 pod_ready.go:92] pod "kube-proxy-ddg5h" in "kube-system" namespace has status "Ready":"True"
	I1127 11:53:38.106672  175050 pod_ready.go:81] duration metric: took 5.155968ms waiting for pod "kube-proxy-ddg5h" in "kube-system" namespace to be "Ready" ...
	I1127 11:53:38.106680  175050 pod_ready.go:38] duration metric: took 7.025036043s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1127 11:53:38.106719  175050 api_server.go:52] waiting for apiserver process to appear ...
	I1127 11:53:38.106773  175050 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 11:53:38.122116  175050 api_server.go:72] duration metric: took 7.331966111s to wait for apiserver process to appear ...
	I1127 11:53:38.122135  175050 api_server.go:88] waiting for apiserver healthz status ...
	I1127 11:53:38.122149  175050 api_server.go:253] Checking apiserver healthz at https://192.168.61.126:8443/healthz ...
	I1127 11:53:38.128299  175050 api_server.go:279] https://192.168.61.126:8443/healthz returned 200:
	ok
	I1127 11:53:38.129117  175050 api_server.go:141] control plane version: v1.16.0
	I1127 11:53:38.129132  175050 api_server.go:131] duration metric: took 6.990812ms to wait for apiserver health ...
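
The healthz wait above is a plain HTTPS GET against the apiserver, repeated until it answers 200 "ok". A minimal probe in the same spirit; the InsecureSkipVerify shortcut is illustrative only, where minikube verifies against the cluster's CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeHealthz returns nil once GET /healthz answers 200 "ok".
	func probeHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustrative shortcut: skip cert verification instead of loading the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("apiserver not healthy within %s", timeout)
	}

	func main() {
		fmt.Println(probeHealthz("https://192.168.61.126:8443/healthz", time.Minute))
	}
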
	I1127 11:53:38.129140  175050 system_pods.go:43] waiting for kube-system pods to appear ...
	I1127 11:53:38.132204  175050 system_pods.go:59] 4 kube-system pods found
	I1127 11:53:38.132226  175050 system_pods.go:61] "coredns-5644d7b6d9-745v8" [5fe79e34-b269-4c80-879f-d2a92b10bb2d] Running
	I1127 11:53:38.132230  175050 system_pods.go:61] "kube-proxy-ddg5h" [9776e525-930f-44c8-83cf-5fdf44dfda52] Running
	I1127 11:53:38.132236  175050 system_pods.go:61] "metrics-server-74d5856cc6-22vqx" [81a4a56f-67ac-41e9-a96a-4c7551acb859] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:53:38.132249  175050 system_pods.go:61] "storage-provisioner" [3594a7b3-6dcd-4f52-b549-49e33445b763] Running
	I1127 11:53:38.132256  175050 system_pods.go:74] duration metric: took 3.111491ms to wait for pod list to return data ...
	I1127 11:53:38.132263  175050 default_sa.go:34] waiting for default service account to be created ...
	I1127 11:53:38.134649  175050 default_sa.go:45] found service account: "default"
	I1127 11:53:38.134665  175050 default_sa.go:55] duration metric: took 2.396678ms for default service account to be created ...
	I1127 11:53:38.134671  175050 system_pods.go:116] waiting for k8s-apps to be running ...
	I1127 11:53:38.138143  175050 system_pods.go:86] 4 kube-system pods found
	I1127 11:53:38.138164  175050 system_pods.go:89] "coredns-5644d7b6d9-745v8" [5fe79e34-b269-4c80-879f-d2a92b10bb2d] Running
	I1127 11:53:38.138172  175050 system_pods.go:89] "kube-proxy-ddg5h" [9776e525-930f-44c8-83cf-5fdf44dfda52] Running
	I1127 11:53:38.138185  175050 system_pods.go:89] "metrics-server-74d5856cc6-22vqx" [81a4a56f-67ac-41e9-a96a-4c7551acb859] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:53:38.138192  175050 system_pods.go:89] "storage-provisioner" [3594a7b3-6dcd-4f52-b549-49e33445b763] Running
	I1127 11:53:38.138212  175050 retry.go:31] will retry after 188.356277ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1127 11:53:38.331649  175050 system_pods.go:86] 4 kube-system pods found
	I1127 11:53:38.331682  175050 system_pods.go:89] "coredns-5644d7b6d9-745v8" [5fe79e34-b269-4c80-879f-d2a92b10bb2d] Running
	I1127 11:53:38.331692  175050 system_pods.go:89] "kube-proxy-ddg5h" [9776e525-930f-44c8-83cf-5fdf44dfda52] Running
	I1127 11:53:38.331704  175050 system_pods.go:89] "metrics-server-74d5856cc6-22vqx" [81a4a56f-67ac-41e9-a96a-4c7551acb859] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:53:38.331713  175050 system_pods.go:89] "storage-provisioner" [3594a7b3-6dcd-4f52-b549-49e33445b763] Running
	I1127 11:53:38.331735  175050 retry.go:31] will retry after 334.89762ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1127 11:53:38.672482  175050 system_pods.go:86] 4 kube-system pods found
	I1127 11:53:38.672513  175050 system_pods.go:89] "coredns-5644d7b6d9-745v8" [5fe79e34-b269-4c80-879f-d2a92b10bb2d] Running
	I1127 11:53:38.672519  175050 system_pods.go:89] "kube-proxy-ddg5h" [9776e525-930f-44c8-83cf-5fdf44dfda52] Running
	I1127 11:53:38.672529  175050 system_pods.go:89] "metrics-server-74d5856cc6-22vqx" [81a4a56f-67ac-41e9-a96a-4c7551acb859] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:53:38.672536  175050 system_pods.go:89] "storage-provisioner" [3594a7b3-6dcd-4f52-b549-49e33445b763] Running
	I1127 11:53:38.672561  175050 retry.go:31] will retry after 436.777325ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1127 11:53:39.114504  175050 system_pods.go:86] 4 kube-system pods found
	I1127 11:53:39.114530  175050 system_pods.go:89] "coredns-5644d7b6d9-745v8" [5fe79e34-b269-4c80-879f-d2a92b10bb2d] Running
	I1127 11:53:39.114535  175050 system_pods.go:89] "kube-proxy-ddg5h" [9776e525-930f-44c8-83cf-5fdf44dfda52] Running
	I1127 11:53:39.114542  175050 system_pods.go:89] "metrics-server-74d5856cc6-22vqx" [81a4a56f-67ac-41e9-a96a-4c7551acb859] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:53:39.114547  175050 system_pods.go:89] "storage-provisioner" [3594a7b3-6dcd-4f52-b549-49e33445b763] Running
	I1127 11:53:39.114566  175050 retry.go:31] will retry after 555.859852ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1127 11:53:39.674724  175050 system_pods.go:86] 4 kube-system pods found
	I1127 11:53:39.674750  175050 system_pods.go:89] "coredns-5644d7b6d9-745v8" [5fe79e34-b269-4c80-879f-d2a92b10bb2d] Running
	I1127 11:53:39.674755  175050 system_pods.go:89] "kube-proxy-ddg5h" [9776e525-930f-44c8-83cf-5fdf44dfda52] Running
	I1127 11:53:39.674761  175050 system_pods.go:89] "metrics-server-74d5856cc6-22vqx" [81a4a56f-67ac-41e9-a96a-4c7551acb859] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:53:39.674766  175050 system_pods.go:89] "storage-provisioner" [3594a7b3-6dcd-4f52-b549-49e33445b763] Running
	I1127 11:53:39.674785  175050 retry.go:31] will retry after 590.819321ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1127 11:53:40.271341  175050 system_pods.go:86] 4 kube-system pods found
	I1127 11:53:40.271375  175050 system_pods.go:89] "coredns-5644d7b6d9-745v8" [5fe79e34-b269-4c80-879f-d2a92b10bb2d] Running
	I1127 11:53:40.271383  175050 system_pods.go:89] "kube-proxy-ddg5h" [9776e525-930f-44c8-83cf-5fdf44dfda52] Running
	I1127 11:53:40.271393  175050 system_pods.go:89] "metrics-server-74d5856cc6-22vqx" [81a4a56f-67ac-41e9-a96a-4c7551acb859] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:53:40.271401  175050 system_pods.go:89] "storage-provisioner" [3594a7b3-6dcd-4f52-b549-49e33445b763] Running
	I1127 11:53:40.271425  175050 retry.go:31] will retry after 790.72964ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1127 11:53:41.067330  175050 system_pods.go:86] 4 kube-system pods found
	I1127 11:53:41.067354  175050 system_pods.go:89] "coredns-5644d7b6d9-745v8" [5fe79e34-b269-4c80-879f-d2a92b10bb2d] Running
	I1127 11:53:41.067359  175050 system_pods.go:89] "kube-proxy-ddg5h" [9776e525-930f-44c8-83cf-5fdf44dfda52] Running
	I1127 11:53:41.067365  175050 system_pods.go:89] "metrics-server-74d5856cc6-22vqx" [81a4a56f-67ac-41e9-a96a-4c7551acb859] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:53:41.067370  175050 system_pods.go:89] "storage-provisioner" [3594a7b3-6dcd-4f52-b549-49e33445b763] Running
	I1127 11:53:41.067389  175050 retry.go:31] will retry after 1.091049069s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1127 11:53:42.168989  175050 system_pods.go:86] 4 kube-system pods found
	I1127 11:53:42.169020  175050 system_pods.go:89] "coredns-5644d7b6d9-745v8" [5fe79e34-b269-4c80-879f-d2a92b10bb2d] Running
	I1127 11:53:42.169026  175050 system_pods.go:89] "kube-proxy-ddg5h" [9776e525-930f-44c8-83cf-5fdf44dfda52] Running
	I1127 11:53:42.169035  175050 system_pods.go:89] "metrics-server-74d5856cc6-22vqx" [81a4a56f-67ac-41e9-a96a-4c7551acb859] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:53:42.169040  175050 system_pods.go:89] "storage-provisioner" [3594a7b3-6dcd-4f52-b549-49e33445b763] Running
	I1127 11:53:42.169057  175050 retry.go:31] will retry after 1.300680311s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1127 11:53:43.475591  175050 system_pods.go:86] 4 kube-system pods found
	I1127 11:53:43.475628  175050 system_pods.go:89] "coredns-5644d7b6d9-745v8" [5fe79e34-b269-4c80-879f-d2a92b10bb2d] Running
	I1127 11:53:43.475636  175050 system_pods.go:89] "kube-proxy-ddg5h" [9776e525-930f-44c8-83cf-5fdf44dfda52] Running
	I1127 11:53:43.475646  175050 system_pods.go:89] "metrics-server-74d5856cc6-22vqx" [81a4a56f-67ac-41e9-a96a-4c7551acb859] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:53:43.475653  175050 system_pods.go:89] "storage-provisioner" [3594a7b3-6dcd-4f52-b549-49e33445b763] Running
	I1127 11:53:43.475675  175050 retry.go:31] will retry after 1.4286434s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1127 11:53:44.909386  175050 system_pods.go:86] 4 kube-system pods found
	I1127 11:53:44.909416  175050 system_pods.go:89] "coredns-5644d7b6d9-745v8" [5fe79e34-b269-4c80-879f-d2a92b10bb2d] Running
	I1127 11:53:44.909421  175050 system_pods.go:89] "kube-proxy-ddg5h" [9776e525-930f-44c8-83cf-5fdf44dfda52] Running
	I1127 11:53:44.909428  175050 system_pods.go:89] "metrics-server-74d5856cc6-22vqx" [81a4a56f-67ac-41e9-a96a-4c7551acb859] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:53:44.909432  175050 system_pods.go:89] "storage-provisioner" [3594a7b3-6dcd-4f52-b549-49e33445b763] Running
	I1127 11:53:44.909448  175050 retry.go:31] will retry after 1.761753845s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1127 11:53:46.676067  175050 system_pods.go:86] 4 kube-system pods found
	I1127 11:53:46.676095  175050 system_pods.go:89] "coredns-5644d7b6d9-745v8" [5fe79e34-b269-4c80-879f-d2a92b10bb2d] Running
	I1127 11:53:46.676100  175050 system_pods.go:89] "kube-proxy-ddg5h" [9776e525-930f-44c8-83cf-5fdf44dfda52] Running
	I1127 11:53:46.676108  175050 system_pods.go:89] "metrics-server-74d5856cc6-22vqx" [81a4a56f-67ac-41e9-a96a-4c7551acb859] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:53:46.676113  175050 system_pods.go:89] "storage-provisioner" [3594a7b3-6dcd-4f52-b549-49e33445b763] Running
	I1127 11:53:46.676129  175050 retry.go:31] will retry after 2.29396898s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1127 11:53:48.976293  175050 system_pods.go:86] 4 kube-system pods found
	I1127 11:53:48.976322  175050 system_pods.go:89] "coredns-5644d7b6d9-745v8" [5fe79e34-b269-4c80-879f-d2a92b10bb2d] Running
	I1127 11:53:48.976327  175050 system_pods.go:89] "kube-proxy-ddg5h" [9776e525-930f-44c8-83cf-5fdf44dfda52] Running
	I1127 11:53:48.976334  175050 system_pods.go:89] "metrics-server-74d5856cc6-22vqx" [81a4a56f-67ac-41e9-a96a-4c7551acb859] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:53:48.976339  175050 system_pods.go:89] "storage-provisioner" [3594a7b3-6dcd-4f52-b549-49e33445b763] Running
	I1127 11:53:48.976356  175050 retry.go:31] will retry after 2.810478582s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1127 11:53:51.792108  175050 system_pods.go:86] 4 kube-system pods found
	I1127 11:53:51.792135  175050 system_pods.go:89] "coredns-5644d7b6d9-745v8" [5fe79e34-b269-4c80-879f-d2a92b10bb2d] Running
	I1127 11:53:51.792142  175050 system_pods.go:89] "kube-proxy-ddg5h" [9776e525-930f-44c8-83cf-5fdf44dfda52] Running
	I1127 11:53:51.792149  175050 system_pods.go:89] "metrics-server-74d5856cc6-22vqx" [81a4a56f-67ac-41e9-a96a-4c7551acb859] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:53:51.792154  175050 system_pods.go:89] "storage-provisioner" [3594a7b3-6dcd-4f52-b549-49e33445b763] Running
	I1127 11:53:51.792171  175050 retry.go:31] will retry after 3.619573725s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1127 11:53:55.416200  175050 system_pods.go:86] 4 kube-system pods found
	I1127 11:53:55.416226  175050 system_pods.go:89] "coredns-5644d7b6d9-745v8" [5fe79e34-b269-4c80-879f-d2a92b10bb2d] Running
	I1127 11:53:55.416233  175050 system_pods.go:89] "kube-proxy-ddg5h" [9776e525-930f-44c8-83cf-5fdf44dfda52] Running
	I1127 11:53:55.416243  175050 system_pods.go:89] "metrics-server-74d5856cc6-22vqx" [81a4a56f-67ac-41e9-a96a-4c7551acb859] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:53:55.416250  175050 system_pods.go:89] "storage-provisioner" [3594a7b3-6dcd-4f52-b549-49e33445b763] Running
	I1127 11:53:55.416273  175050 retry.go:31] will retry after 4.309562755s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1127 11:53:59.731664  175050 system_pods.go:86] 4 kube-system pods found
	I1127 11:53:59.731732  175050 system_pods.go:89] "coredns-5644d7b6d9-745v8" [5fe79e34-b269-4c80-879f-d2a92b10bb2d] Running
	I1127 11:53:59.731742  175050 system_pods.go:89] "kube-proxy-ddg5h" [9776e525-930f-44c8-83cf-5fdf44dfda52] Running
	I1127 11:53:59.731749  175050 system_pods.go:89] "metrics-server-74d5856cc6-22vqx" [81a4a56f-67ac-41e9-a96a-4c7551acb859] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:53:59.731759  175050 system_pods.go:89] "storage-provisioner" [3594a7b3-6dcd-4f52-b549-49e33445b763] Running
	I1127 11:53:59.731776  175050 retry.go:31] will retry after 6.485513602s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1127 11:54:06.222613  175050 system_pods.go:86] 4 kube-system pods found
	I1127 11:54:06.222641  175050 system_pods.go:89] "coredns-5644d7b6d9-745v8" [5fe79e34-b269-4c80-879f-d2a92b10bb2d] Running
	I1127 11:54:06.222647  175050 system_pods.go:89] "kube-proxy-ddg5h" [9776e525-930f-44c8-83cf-5fdf44dfda52] Running
	I1127 11:54:06.222654  175050 system_pods.go:89] "metrics-server-74d5856cc6-22vqx" [81a4a56f-67ac-41e9-a96a-4c7551acb859] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:54:06.222659  175050 system_pods.go:89] "storage-provisioner" [3594a7b3-6dcd-4f52-b549-49e33445b763] Running
	I1127 11:54:06.222675  175050 retry.go:31] will retry after 8.5806572s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1127 11:54:14.808683  175050 system_pods.go:86] 4 kube-system pods found
	I1127 11:54:14.808711  175050 system_pods.go:89] "coredns-5644d7b6d9-745v8" [5fe79e34-b269-4c80-879f-d2a92b10bb2d] Running
	I1127 11:54:14.808719  175050 system_pods.go:89] "kube-proxy-ddg5h" [9776e525-930f-44c8-83cf-5fdf44dfda52] Running
	I1127 11:54:14.808729  175050 system_pods.go:89] "metrics-server-74d5856cc6-22vqx" [81a4a56f-67ac-41e9-a96a-4c7551acb859] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:54:14.808735  175050 system_pods.go:89] "storage-provisioner" [3594a7b3-6dcd-4f52-b549-49e33445b763] Running
	I1127 11:54:14.808755  175050 retry.go:31] will retry after 10.097717458s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1127 11:54:24.911451  175050 system_pods.go:86] 5 kube-system pods found
	I1127 11:54:24.911482  175050 system_pods.go:89] "coredns-5644d7b6d9-745v8" [5fe79e34-b269-4c80-879f-d2a92b10bb2d] Running
	I1127 11:54:24.911490  175050 system_pods.go:89] "kube-proxy-ddg5h" [9776e525-930f-44c8-83cf-5fdf44dfda52] Running
	I1127 11:54:24.911496  175050 system_pods.go:89] "kube-scheduler-old-k8s-version-337707" [a14b1960-9c68-4b72-82be-e8bc3b4fd96c] Running
	I1127 11:54:24.911508  175050 system_pods.go:89] "metrics-server-74d5856cc6-22vqx" [81a4a56f-67ac-41e9-a96a-4c7551acb859] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:54:24.911515  175050 system_pods.go:89] "storage-provisioner" [3594a7b3-6dcd-4f52-b549-49e33445b763] Running
	I1127 11:54:24.911537  175050 retry.go:31] will retry after 10.272435552s: missing components: etcd, kube-apiserver, kube-controller-manager
	I1127 11:54:35.188628  175050 system_pods.go:86] 6 kube-system pods found
	I1127 11:54:35.188656  175050 system_pods.go:89] "coredns-5644d7b6d9-745v8" [5fe79e34-b269-4c80-879f-d2a92b10bb2d] Running
	I1127 11:54:35.188667  175050 system_pods.go:89] "kube-apiserver-old-k8s-version-337707" [69568471-ae73-475d-92eb-8742b7a2f967] Running
	I1127 11:54:35.188673  175050 system_pods.go:89] "kube-proxy-ddg5h" [9776e525-930f-44c8-83cf-5fdf44dfda52] Running
	I1127 11:54:35.188679  175050 system_pods.go:89] "kube-scheduler-old-k8s-version-337707" [a14b1960-9c68-4b72-82be-e8bc3b4fd96c] Running
	I1127 11:54:35.188687  175050 system_pods.go:89] "metrics-server-74d5856cc6-22vqx" [81a4a56f-67ac-41e9-a96a-4c7551acb859] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:54:35.188693  175050 system_pods.go:89] "storage-provisioner" [3594a7b3-6dcd-4f52-b549-49e33445b763] Running
	I1127 11:54:35.188715  175050 retry.go:31] will retry after 14.968448012s: missing components: etcd, kube-controller-manager
	I1127 11:54:50.163509  175050 system_pods.go:86] 8 kube-system pods found
	I1127 11:54:50.163536  175050 system_pods.go:89] "coredns-5644d7b6d9-745v8" [5fe79e34-b269-4c80-879f-d2a92b10bb2d] Running
	I1127 11:54:50.163542  175050 system_pods.go:89] "etcd-old-k8s-version-337707" [f1493a29-c9b1-4fd7-8153-a8171bcbc7c4] Running
	I1127 11:54:50.163546  175050 system_pods.go:89] "kube-apiserver-old-k8s-version-337707" [69568471-ae73-475d-92eb-8742b7a2f967] Running
	I1127 11:54:50.163550  175050 system_pods.go:89] "kube-controller-manager-old-k8s-version-337707" [f16713ef-63d1-4ef6-b4ad-f240f8b2412e] Running
	I1127 11:54:50.163554  175050 system_pods.go:89] "kube-proxy-ddg5h" [9776e525-930f-44c8-83cf-5fdf44dfda52] Running
	I1127 11:54:50.163558  175050 system_pods.go:89] "kube-scheduler-old-k8s-version-337707" [a14b1960-9c68-4b72-82be-e8bc3b4fd96c] Running
	I1127 11:54:50.163564  175050 system_pods.go:89] "metrics-server-74d5856cc6-22vqx" [81a4a56f-67ac-41e9-a96a-4c7551acb859] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1127 11:54:50.163568  175050 system_pods.go:89] "storage-provisioner" [3594a7b3-6dcd-4f52-b549-49e33445b763] Running
	I1127 11:54:50.163576  175050 system_pods.go:126] duration metric: took 1m12.028901288s to wait for k8s-apps to be running ...
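
The long run of retry.go lines above shows the poll interval growing from roughly 190ms toward 15s while the static control-plane pods (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) come back one by one. A hedged sketch of that retry-with-backoff shape; the doubling factor and jitter here are illustrative, not minikube's exact constants:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries fn, roughly doubling the wait (with jitter) each
	// attempt, until fn succeeds or the overall deadline passes.
	func retryWithBackoff(fn func() error, initial, max time.Duration, deadline time.Time) error {
		wait := initial
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("deadline exceeded: %w", err)
			}
			sleep := wait + time.Duration(rand.Int63n(int64(wait/2))) // add up to 50% jitter
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			if wait *= 2; wait > max {
				wait = max
			}
		}
	}

	func main() {
		missing := 3
		err := retryWithBackoff(func() error {
			if missing--; missing <= 0 {
				return nil
			}
			return errors.New("missing components: etcd, kube-controller-manager")
		}, 200*time.Millisecond, 10*time.Second, time.Now().Add(time.Minute))
		fmt.Println(err)
	}
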
	I1127 11:54:50.163585  175050 system_svc.go:44] waiting for kubelet service to be running ....
	I1127 11:54:50.163628  175050 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:54:50.176311  175050 system_svc.go:56] duration metric: took 12.717953ms WaitForService to wait for kubelet.
	I1127 11:54:50.176334  175050 kubeadm.go:581] duration metric: took 1m19.386190718s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1127 11:54:50.176352  175050 node_conditions.go:102] verifying NodePressure condition ...
	I1127 11:54:50.179045  175050 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1127 11:54:50.179070  175050 node_conditions.go:123] node cpu capacity is 2
	I1127 11:54:50.179082  175050 node_conditions.go:105] duration metric: took 2.725782ms to run NodePressure ...
	I1127 11:54:50.179093  175050 start.go:228] waiting for startup goroutines ...
	I1127 11:54:50.179100  175050 start.go:233] waiting for cluster config update ...
	I1127 11:54:50.179109  175050 start.go:242] writing updated cluster config ...
	I1127 11:54:50.179435  175050 ssh_runner.go:195] Run: rm -f paused
	I1127 11:54:50.226296  175050 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1127 11:54:50.227959  175050 out.go:177] 
	W1127 11:54:50.229428  175050 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1127 11:54:50.230905  175050 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1127 11:54:50.232532  175050 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-337707" cluster and "default" namespace by default
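
The closing warning compares kubectl's minor version against the cluster's: 1.28 versus 1.16 gives the reported minor skew of 12. A tiny sketch of that comparison, with parsing simplified relative to real semver handling:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorSkew returns |minor(a) - minor(b)| for versions like "1.28.4".
	func minorSkew(a, b string) int {
		minor := func(v string) int {
			parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
			n, _ := strconv.Atoi(parts[1])
			return n
		}
		d := minor(a) - minor(b)
		if d < 0 {
			d = -d
		}
		return d
	}

	func main() {
		fmt.Println(minorSkew("1.28.4", "1.16.0")) // 12, matching the warning in the log
	}
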
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-11-27 11:47:26 UTC, ends at Mon 2023-11-27 11:55:01 UTC. --
	Nov 27 11:53:48 old-k8s-version-337707 dockerd[1070]: time="2023-11-27T11:53:48.842469856Z" level=info msg="shim disconnected" id=5cf5d9d0429e5a480cee10f10bc6f8b4beb80a49a213b0eca343f5ace6a6bb8e namespace=moby
	Nov 27 11:53:48 old-k8s-version-337707 dockerd[1070]: time="2023-11-27T11:53:48.842648895Z" level=warning msg="cleaning up after shim disconnected" id=5cf5d9d0429e5a480cee10f10bc6f8b4beb80a49a213b0eca343f5ace6a6bb8e namespace=moby
	Nov 27 11:53:48 old-k8s-version-337707 dockerd[1070]: time="2023-11-27T11:53:48.842920999Z" level=info msg="cleaning up dead shim" namespace=moby
	Nov 27 11:54:09 old-k8s-version-337707 dockerd[1070]: time="2023-11-27T11:54:09.811357772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 27 11:54:09 old-k8s-version-337707 dockerd[1070]: time="2023-11-27T11:54:09.811527956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 27 11:54:09 old-k8s-version-337707 dockerd[1070]: time="2023-11-27T11:54:09.811545270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 27 11:54:09 old-k8s-version-337707 dockerd[1070]: time="2023-11-27T11:54:09.811554079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 27 11:54:10 old-k8s-version-337707 dockerd[1070]: time="2023-11-27T11:54:10.183259643Z" level=info msg="shim disconnected" id=a79f748de6f3bcad44cad773ee20ccb540fd9cac0334ab223220d375d9f6edf7 namespace=moby
	Nov 27 11:54:10 old-k8s-version-337707 dockerd[1070]: time="2023-11-27T11:54:10.183334942Z" level=warning msg="cleaning up after shim disconnected" id=a79f748de6f3bcad44cad773ee20ccb540fd9cac0334ab223220d375d9f6edf7 namespace=moby
	Nov 27 11:54:10 old-k8s-version-337707 dockerd[1070]: time="2023-11-27T11:54:10.183346693Z" level=info msg="cleaning up dead shim" namespace=moby
	Nov 27 11:54:10 old-k8s-version-337707 dockerd[1063]: time="2023-11-27T11:54:10.183624378Z" level=info msg="ignoring event" container=a79f748de6f3bcad44cad773ee20ccb540fd9cac0334ab223220d375d9f6edf7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 11:54:11 old-k8s-version-337707 dockerd[1063]: time="2023-11-27T11:54:11.749220421Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 27 11:54:11 old-k8s-version-337707 dockerd[1063]: time="2023-11-27T11:54:11.749583943Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 27 11:54:11 old-k8s-version-337707 dockerd[1063]: time="2023-11-27T11:54:11.752542266Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 27 11:54:43 old-k8s-version-337707 dockerd[1070]: time="2023-11-27T11:54:43.804177810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 27 11:54:43 old-k8s-version-337707 dockerd[1070]: time="2023-11-27T11:54:43.804471801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 27 11:54:43 old-k8s-version-337707 dockerd[1070]: time="2023-11-27T11:54:43.804554896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 27 11:54:43 old-k8s-version-337707 dockerd[1070]: time="2023-11-27T11:54:43.804640271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 27 11:54:44 old-k8s-version-337707 dockerd[1063]: time="2023-11-27T11:54:44.201342518Z" level=info msg="ignoring event" container=f931196d3f5b3b679f030e2ec3b420e27cc3c52a7a3179119e41d725f411961d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 27 11:54:44 old-k8s-version-337707 dockerd[1070]: time="2023-11-27T11:54:44.206064565Z" level=info msg="shim disconnected" id=f931196d3f5b3b679f030e2ec3b420e27cc3c52a7a3179119e41d725f411961d namespace=moby
	Nov 27 11:54:44 old-k8s-version-337707 dockerd[1070]: time="2023-11-27T11:54:44.206187658Z" level=warning msg="cleaning up after shim disconnected" id=f931196d3f5b3b679f030e2ec3b420e27cc3c52a7a3179119e41d725f411961d namespace=moby
	Nov 27 11:54:44 old-k8s-version-337707 dockerd[1070]: time="2023-11-27T11:54:44.206231726Z" level=info msg="cleaning up dead shim" namespace=moby
	Nov 27 11:54:56 old-k8s-version-337707 dockerd[1063]: time="2023-11-27T11:54:56.735856179Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 27 11:54:56 old-k8s-version-337707 dockerd[1063]: time="2023-11-27T11:54:56.736222447Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 27 11:54:56 old-k8s-version-337707 dockerd[1063]: time="2023-11-27T11:54:56.738841608Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	
	* 
	* ==> container status <==
	* time="2023-11-27T11:55:01Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	CONTAINER ID   IMAGE                    COMMAND                  CREATED              STATUS                      PORTS     NAMES
	f931196d3f5b   a90209bb39e3             "nginx -g 'daemon of…"   18 seconds ago       Exited (1) 17 seconds ago             k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-d6b4b5544-28k4h_kubernetes-dashboard_426e9032-4343-4072-814b-ca5a2527abb0_3
	cb8aa0192e55   kubernetesui/dashboard   "/dashboard --insecu…"   About a minute ago   Up About a minute                     k8s_kubernetes-dashboard_kubernetes-dashboard-84b68f675b-wz5zn_kubernetes-dashboard_43213743-ecae-4e1e-855d-0d784743a492_0
	40594328a362   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kubernetes-dashboard-84b68f675b-wz5zn_kubernetes-dashboard_43213743-ecae-4e1e-855d-0d784743a492_0
	487b043288ad   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_dashboard-metrics-scraper-d6b4b5544-28k4h_kubernetes-dashboard_426e9032-4343-4072-814b-ca5a2527abb0_0
	cbfacdae26c7   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_metrics-server-74d5856cc6-22vqx_kube-system_81a4a56f-67ac-41e9-a96a-4c7551acb859_0
	faf77483d5ca   6e38f40d628d             "/storage-provisioner"   About a minute ago   Up About a minute                     k8s_storage-provisioner_storage-provisioner_kube-system_3594a7b3-6dcd-4f52-b549-49e33445b763_0
	427b796fcdc8   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_storage-provisioner_kube-system_3594a7b3-6dcd-4f52-b549-49e33445b763_0
	4c9d3ac136d5   bf261d157914             "/coredns -conf /etc…"   About a minute ago   Up About a minute                     k8s_coredns_coredns-5644d7b6d9-745v8_kube-system_5fe79e34-b269-4c80-879f-d2a92b10bb2d_0
	bc5454dfccd9   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_coredns-5644d7b6d9-745v8_kube-system_5fe79e34-b269-4c80-879f-d2a92b10bb2d_0
	8356ab73732c   c21b0c7400f9             "/usr/local/bin/kube…"   About a minute ago   Up About a minute                     k8s_kube-proxy_kube-proxy-ddg5h_kube-system_9776e525-930f-44c8-83cf-5fdf44dfda52_0
	82c92a93fb6d   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-proxy-ddg5h_kube-system_9776e525-930f-44c8-83cf-5fdf44dfda52_0
	1268e7650654   301ddc62b80b             "kube-scheduler --au…"   About a minute ago   Up About a minute                     k8s_kube-scheduler_kube-scheduler-old-k8s-version-337707_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	43f28212b92f   06a629a7e51c             "kube-controller-man…"   About a minute ago   Up About a minute                     k8s_kube-controller-manager_kube-controller-manager-old-k8s-version-337707_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	0d750e758146   b2756210eeab             "etcd --advertise-cl…"   About a minute ago   Up About a minute                     k8s_etcd_etcd-old-k8s-version-337707_kube-system_711520f4b61075a07cc16622c65311ec_0
	4de05354b8e1   b305571ca60a             "kube-apiserver --ad…"   About a minute ago   Up About a minute                     k8s_kube-apiserver_kube-apiserver-old-k8s-version-337707_kube-system_92bfd2604d59b12a37262d6308920f18_0
	b9adae203bed   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-scheduler-old-k8s-version-337707_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	d1c80df84b70   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-controller-manager-old-k8s-version-337707_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	29f17a06d7b4   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-apiserver-old-k8s-version-337707_kube-system_92bfd2604d59b12a37262d6308920f18_0
	58f39b3e2a19   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_etcd-old-k8s-version-337707_kube-system_711520f4b61075a07cc16622c65311ec_0
	
	* 
	* ==> coredns [4c9d3ac136d5] <==
	* .:53
	2023-11-27T11:53:32.573Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-11-27T11:53:32.573Z [INFO] CoreDNS-1.6.2
	2023-11-27T11:53:32.573Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-11-27T11:54:06.753Z [INFO] plugin/reload: Running configuration MD5 = 7bc8613a521eb1a1737fc3e7c0fea3ca
	[INFO] Reloading complete
	2023-11-27T11:54:06.787Z [INFO] 127.0.0.1:33256 - 49215 "HINFO IN 2491153908171116205.2952303699545350598. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033710014s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-337707
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-337707
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=81390b5609e7feb2151fde4633273d04eb05a21f
	                    minikube.k8s.io/name=old-k8s-version-337707
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_27T11_53_15_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Nov 2023 11:53:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Nov 2023 11:54:11 +0000   Mon, 27 Nov 2023 11:53:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Nov 2023 11:54:11 +0000   Mon, 27 Nov 2023 11:53:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Nov 2023 11:54:11 +0000   Mon, 27 Nov 2023 11:53:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Nov 2023 11:54:11 +0000   Mon, 27 Nov 2023 11:53:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.126
	  Hostname:    old-k8s-version-337707
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 55d017fe0ee54d3c98062b08e801a39e
	 System UUID:                55d017fe-0ee5-4d3c-9806-2b08e801a39e
	 Boot ID:                    31d151ae-4ee7-4f43-b1d2-8341d0c8e89d
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://24.0.7
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-745v8                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     91s
	  kube-system                etcd-old-k8s-version-337707                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                kube-apiserver-old-k8s-version-337707             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                kube-controller-manager-old-k8s-version-337707    200m (10%)    0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                kube-proxy-ddg5h                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                kube-scheduler-old-k8s-version-337707             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                metrics-server-74d5856cc6-22vqx                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         88s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kubernetes-dashboard       dashboard-metrics-scraper-d6b4b5544-28k4h         0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kubernetes-dashboard       kubernetes-dashboard-84b68f675b-wz5zn             0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From                                Message
	  ----    ------                   ----                 ----                                -------
	  Normal  NodeHasSufficientMemory  117s (x8 over 117s)  kubelet, old-k8s-version-337707     Node old-k8s-version-337707 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s (x8 over 117s)  kubelet, old-k8s-version-337707     Node old-k8s-version-337707 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s (x7 over 117s)  kubelet, old-k8s-version-337707     Node old-k8s-version-337707 status is now: NodeHasSufficientPID
	  Normal  Starting                 90s                  kube-proxy, old-k8s-version-337707  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070899] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.504599] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.409790] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.135377] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.528650] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.785165] systemd-fstab-generator[498]: Ignoring "noauto" for root device
	[  +0.124325] systemd-fstab-generator[509]: Ignoring "noauto" for root device
	[  +1.277275] systemd-fstab-generator[774]: Ignoring "noauto" for root device
	[  +0.319596] systemd-fstab-generator[812]: Ignoring "noauto" for root device
	[  +0.140374] systemd-fstab-generator[823]: Ignoring "noauto" for root device
	[  +0.170034] systemd-fstab-generator[836]: Ignoring "noauto" for root device
	[  +5.795959] systemd-fstab-generator[1054]: Ignoring "noauto" for root device
	[  +3.536151] kauditd_printk_skb: 67 callbacks suppressed
	[Nov27 11:48] systemd-fstab-generator[1468]: Ignoring "noauto" for root device
	[  +0.493020] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.164309] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +24.803623] kauditd_printk_skb: 5 callbacks suppressed
	[Nov27 11:49] hrtimer: interrupt took 6305619 ns
	[Nov27 11:53] systemd-fstab-generator[5460]: Ignoring "noauto" for root device
	[ +34.582530] kauditd_printk_skb: 4 callbacks suppressed
	
	* 
	* ==> etcd [0d750e758146] <==
	* 2023-11-27 11:53:06.313101 I | etcdserver: initial cluster = old-k8s-version-337707=https://192.168.61.126:2380
	2023-11-27 11:53:06.332114 I | etcdserver: starting member 2456aadc51424cb5 in cluster c6330389cea17d04
	2023-11-27 11:53:06.332142 I | raft: 2456aadc51424cb5 became follower at term 0
	2023-11-27 11:53:06.332154 I | raft: newRaft 2456aadc51424cb5 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-11-27 11:53:06.332158 I | raft: 2456aadc51424cb5 became follower at term 1
	2023-11-27 11:53:06.340243 W | auth: simple token is not cryptographically signed
	2023-11-27 11:53:06.359609 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-11-27 11:53:06.368344 I | etcdserver: 2456aadc51424cb5 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-11-27 11:53:06.368895 I | etcdserver/membership: added member 2456aadc51424cb5 [https://192.168.61.126:2380] to cluster c6330389cea17d04
	2023-11-27 11:53:06.379413 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-27 11:53:06.379989 I | embed: listening for metrics on http://192.168.61.126:2381
	2023-11-27 11:53:06.380231 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-11-27 11:53:06.771858 I | raft: 2456aadc51424cb5 is starting a new election at term 1
	2023-11-27 11:53:06.807330 I | raft: 2456aadc51424cb5 became candidate at term 2
	2023-11-27 11:53:06.807514 I | raft: 2456aadc51424cb5 received MsgVoteResp from 2456aadc51424cb5 at term 2
	2023-11-27 11:53:06.807623 I | raft: 2456aadc51424cb5 became leader at term 2
	2023-11-27 11:53:06.807858 I | raft: raft.node: 2456aadc51424cb5 elected leader 2456aadc51424cb5 at term 2
	2023-11-27 11:53:06.809041 I | etcdserver: published {Name:old-k8s-version-337707 ClientURLs:[https://192.168.61.126:2379]} to cluster c6330389cea17d04
	2023-11-27 11:53:06.809128 I | embed: ready to serve client requests
	2023-11-27 11:53:06.810819 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-27 11:53:06.810968 I | etcdserver: setting up the initial cluster version to 3.3
	2023-11-27 11:53:06.811327 I | embed: ready to serve client requests
	2023-11-27 11:53:06.812675 I | embed: serving client requests on 192.168.61.126:2379
	2023-11-27 11:53:06.815417 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-11-27 11:53:06.815497 I | etcdserver/api: enabled capabilities for version 3.3
	
	* 
	* ==> kernel <==
	*  11:55:01 up 7 min,  0 users,  load average: 0.53, 0.81, 0.42
	Linux old-k8s-version-337707 5.10.57 #1 SMP Thu Nov 16 18:26:12 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [4de05354b8e1] <==
	* I1127 11:53:11.639188       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1127 11:53:11.660383       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I1127 11:53:11.670259       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I1127 11:53:11.670561       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1127 11:53:12.562291       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1127 11:53:13.430866       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1127 11:53:13.711120       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1127 11:53:14.062519       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.61.126]
	I1127 11:53:14.063846       1 controller.go:606] quota admission added evaluator for: endpoints
	I1127 11:53:15.012582       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1127 11:53:15.532106       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1127 11:53:15.867865       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1127 11:53:30.315020       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1127 11:53:30.343383       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I1127 11:53:30.367420       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1127 11:53:34.520352       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1127 11:53:34.520433       1 handler_proxy.go:99] no RequestInfo found in the context
	E1127 11:53:34.520490       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1127 11:53:34.520524       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1127 11:54:34.520837       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1127 11:54:34.520922       1 handler_proxy.go:99] no RequestInfo found in the context
	E1127 11:54:34.520954       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1127 11:54:34.520961       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [43f28212b92f] <==
	* E1127 11:53:32.781981       1 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I1127 11:53:32.862184       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"5fedfd17-501d-4e6f-8b1a-9e8694cfd2b8", APIVersion:"apps/v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-d6b4b5544 to 1
	I1127 11:53:32.863563       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"3eb0634d-d084-4995-80c3-46d1fa9f8a62", APIVersion:"apps/v1", ResourceVersion:"412", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-84b68f675b to 1
	I1127 11:53:32.900107       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"2da879a4-1618-4259-aa6e-a3da8564cc58", APIVersion:"apps/v1", ResourceVersion:"413", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1127 11:53:32.900176       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"b955b42f-c49d-4e43-b561-2e210bdc1ce8", APIVersion:"apps/v1", ResourceVersion:"414", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1127 11:53:32.926382       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1127 11:53:32.929921       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1127 11:53:32.941453       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1127 11:53:32.944906       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1127 11:53:32.945344       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"b955b42f-c49d-4e43-b561-2e210bdc1ce8", APIVersion:"apps/v1", ResourceVersion:"427", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1127 11:53:32.945590       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"2da879a4-1618-4259-aa6e-a3da8564cc58", APIVersion:"apps/v1", ResourceVersion:"426", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1127 11:53:32.952579       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1127 11:53:32.952884       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"2da879a4-1618-4259-aa6e-a3da8564cc58", APIVersion:"apps/v1", ResourceVersion:"426", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1127 11:53:32.954254       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1127 11:53:32.954436       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"b955b42f-c49d-4e43-b561-2e210bdc1ce8", APIVersion:"apps/v1", ResourceVersion:"427", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1127 11:53:32.962292       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1127 11:53:32.962343       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"b955b42f-c49d-4e43-b561-2e210bdc1ce8", APIVersion:"apps/v1", ResourceVersion:"427", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1127 11:53:33.748357       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-74d5856cc6", UID:"84db10c2-78d9-442c-a9cd-de0fb02cef96", APIVersion:"apps/v1", ResourceVersion:"385", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-74d5856cc6-22vqx
	I1127 11:53:33.973237       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"2da879a4-1618-4259-aa6e-a3da8564cc58", APIVersion:"apps/v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-d6b4b5544-28k4h
	I1127 11:53:33.993877       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"b955b42f-c49d-4e43-b561-2e210bdc1ce8", APIVersion:"apps/v1", ResourceVersion:"427", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-84b68f675b-wz5zn
	E1127 11:54:01.110225       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1127 11:54:03.787034       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1127 11:54:31.363163       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1127 11:54:35.788648       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1127 11:55:01.615454       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [8356ab73732c] <==
	* W1127 11:53:31.457960       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1127 11:53:31.470561       1 node.go:135] Successfully retrieved node IP: 192.168.61.126
	I1127 11:53:31.470687       1 server_others.go:149] Using iptables Proxier.
	I1127 11:53:31.472316       1 server.go:529] Version: v1.16.0
	I1127 11:53:31.476617       1 config.go:313] Starting service config controller
	I1127 11:53:31.476699       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1127 11:53:31.478100       1 config.go:131] Starting endpoints config controller
	I1127 11:53:31.478321       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1127 11:53:31.579616       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1127 11:53:31.589022       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [1268e7650654] <==
	* I1127 11:53:10.694453       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1127 11:53:10.817501       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1127 11:53:10.819881       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1127 11:53:10.819971       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1127 11:53:10.820190       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1127 11:53:10.820408       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1127 11:53:10.820476       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1127 11:53:10.820680       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1127 11:53:10.821023       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1127 11:53:10.821097       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1127 11:53:10.821129       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1127 11:53:10.823937       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1127 11:53:11.819123       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1127 11:53:11.821413       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1127 11:53:11.825469       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1127 11:53:11.831446       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1127 11:53:11.834611       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1127 11:53:11.838876       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1127 11:53:11.841545       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1127 11:53:11.842852       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1127 11:53:11.844871       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1127 11:53:11.846284       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1127 11:53:11.847965       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1127 11:53:30.400296       1 factory.go:585] pod is already present in unschedulableQ
	E1127 11:53:30.413718       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-11-27 11:47:26 UTC, ends at Mon 2023-11-27 11:55:02 UTC. --
	Nov 27 11:53:50 old-k8s-version-337707 kubelet[5480]: W1127 11:53:50.349227    5480 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-28k4h through plugin: invalid network status for
	Nov 27 11:53:50 old-k8s-version-337707 kubelet[5480]: E1127 11:53:50.353132    5480 pod_workers.go:191] Error syncing pod 426e9032-4343-4072-814b-ca5a2527abb0 ("dashboard-metrics-scraper-d6b4b5544-28k4h_kubernetes-dashboard(426e9032-4343-4072-814b-ca5a2527abb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-28k4h_kubernetes-dashboard(426e9032-4343-4072-814b-ca5a2527abb0)"
	Nov 27 11:53:55 old-k8s-version-337707 kubelet[5480]: E1127 11:53:55.302947    5480 pod_workers.go:191] Error syncing pod 426e9032-4343-4072-814b-ca5a2527abb0 ("dashboard-metrics-scraper-d6b4b5544-28k4h_kubernetes-dashboard(426e9032-4343-4072-814b-ca5a2527abb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-28k4h_kubernetes-dashboard(426e9032-4343-4072-814b-ca5a2527abb0)"
	Nov 27 11:54:00 old-k8s-version-337707 kubelet[5480]: E1127 11:54:00.704889    5480 pod_workers.go:191] Error syncing pod 81a4a56f-67ac-41e9-a96a-4c7551acb859 ("metrics-server-74d5856cc6-22vqx_kube-system(81a4a56f-67ac-41e9-a96a-4c7551acb859)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 27 11:54:10 old-k8s-version-337707 kubelet[5480]: W1127 11:54:10.231579    5480 container.go:409] Failed to create summary reader for "/kubepods/besteffort/pod426e9032-4343-4072-814b-ca5a2527abb0/a79f748de6f3bcad44cad773ee20ccb540fd9cac0334ab223220d375d9f6edf7": none of the resources are being tracked.
	Nov 27 11:54:10 old-k8s-version-337707 kubelet[5480]: W1127 11:54:10.474050    5480 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-28k4h through plugin: invalid network status for
	Nov 27 11:54:10 old-k8s-version-337707 kubelet[5480]: E1127 11:54:10.479521    5480 pod_workers.go:191] Error syncing pod 426e9032-4343-4072-814b-ca5a2527abb0 ("dashboard-metrics-scraper-d6b4b5544-28k4h_kubernetes-dashboard(426e9032-4343-4072-814b-ca5a2527abb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-28k4h_kubernetes-dashboard(426e9032-4343-4072-814b-ca5a2527abb0)"
	Nov 27 11:54:11 old-k8s-version-337707 kubelet[5480]: W1127 11:54:11.486092    5480 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-28k4h through plugin: invalid network status for
	Nov 27 11:54:11 old-k8s-version-337707 kubelet[5480]: E1127 11:54:11.753140    5480 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 27 11:54:11 old-k8s-version-337707 kubelet[5480]: E1127 11:54:11.753245    5480 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 27 11:54:11 old-k8s-version-337707 kubelet[5480]: E1127 11:54:11.753314    5480 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 27 11:54:11 old-k8s-version-337707 kubelet[5480]: E1127 11:54:11.753379    5480 pod_workers.go:191] Error syncing pod 81a4a56f-67ac-41e9-a96a-4c7551acb859 ("metrics-server-74d5856cc6-22vqx_kube-system(81a4a56f-67ac-41e9-a96a-4c7551acb859)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 27 11:54:15 old-k8s-version-337707 kubelet[5480]: E1127 11:54:15.302942    5480 pod_workers.go:191] Error syncing pod 426e9032-4343-4072-814b-ca5a2527abb0 ("dashboard-metrics-scraper-d6b4b5544-28k4h_kubernetes-dashboard(426e9032-4343-4072-814b-ca5a2527abb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-28k4h_kubernetes-dashboard(426e9032-4343-4072-814b-ca5a2527abb0)"
	Nov 27 11:54:26 old-k8s-version-337707 kubelet[5480]: E1127 11:54:26.707663    5480 pod_workers.go:191] Error syncing pod 81a4a56f-67ac-41e9-a96a-4c7551acb859 ("metrics-server-74d5856cc6-22vqx_kube-system(81a4a56f-67ac-41e9-a96a-4c7551acb859)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 27 11:54:28 old-k8s-version-337707 kubelet[5480]: E1127 11:54:28.704443    5480 pod_workers.go:191] Error syncing pod 426e9032-4343-4072-814b-ca5a2527abb0 ("dashboard-metrics-scraper-d6b4b5544-28k4h_kubernetes-dashboard(426e9032-4343-4072-814b-ca5a2527abb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-28k4h_kubernetes-dashboard(426e9032-4343-4072-814b-ca5a2527abb0)"
	Nov 27 11:54:41 old-k8s-version-337707 kubelet[5480]: E1127 11:54:41.704199    5480 pod_workers.go:191] Error syncing pod 81a4a56f-67ac-41e9-a96a-4c7551acb859 ("metrics-server-74d5856cc6-22vqx_kube-system(81a4a56f-67ac-41e9-a96a-4c7551acb859)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 27 11:54:44 old-k8s-version-337707 kubelet[5480]: W1127 11:54:44.721279    5480 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-28k4h through plugin: invalid network status for
	Nov 27 11:54:44 old-k8s-version-337707 kubelet[5480]: E1127 11:54:44.731835    5480 pod_workers.go:191] Error syncing pod 426e9032-4343-4072-814b-ca5a2527abb0 ("dashboard-metrics-scraper-d6b4b5544-28k4h_kubernetes-dashboard(426e9032-4343-4072-814b-ca5a2527abb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-28k4h_kubernetes-dashboard(426e9032-4343-4072-814b-ca5a2527abb0)"
	Nov 27 11:54:45 old-k8s-version-337707 kubelet[5480]: W1127 11:54:45.737999    5480 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-28k4h through plugin: invalid network status for
	Nov 27 11:54:45 old-k8s-version-337707 kubelet[5480]: E1127 11:54:45.743812    5480 pod_workers.go:191] Error syncing pod 426e9032-4343-4072-814b-ca5a2527abb0 ("dashboard-metrics-scraper-d6b4b5544-28k4h_kubernetes-dashboard(426e9032-4343-4072-814b-ca5a2527abb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-28k4h_kubernetes-dashboard(426e9032-4343-4072-814b-ca5a2527abb0)"
	Nov 27 11:54:56 old-k8s-version-337707 kubelet[5480]: E1127 11:54:56.739531    5480 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 27 11:54:56 old-k8s-version-337707 kubelet[5480]: E1127 11:54:56.739650    5480 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 27 11:54:56 old-k8s-version-337707 kubelet[5480]: E1127 11:54:56.739691    5480 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 27 11:54:56 old-k8s-version-337707 kubelet[5480]: E1127 11:54:56.742824    5480 pod_workers.go:191] Error syncing pod 81a4a56f-67ac-41e9-a96a-4c7551acb859 ("metrics-server-74d5856cc6-22vqx_kube-system(81a4a56f-67ac-41e9-a96a-4c7551acb859)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 27 11:54:58 old-k8s-version-337707 kubelet[5480]: E1127 11:54:58.703486    5480 pod_workers.go:191] Error syncing pod 426e9032-4343-4072-814b-ca5a2527abb0 ("dashboard-metrics-scraper-d6b4b5544-28k4h_kubernetes-dashboard(426e9032-4343-4072-814b-ca5a2527abb0)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-28k4h_kubernetes-dashboard(426e9032-4343-4072-814b-ca5a2527abb0)"
	
	* 
	* ==> kubernetes-dashboard [cb8aa0192e55] <==
	* 2023/11/27 11:53:42 Starting overwatch
	2023/11/27 11:53:42 Using namespace: kubernetes-dashboard
	2023/11/27 11:53:42 Using in-cluster config to connect to apiserver
	2023/11/27 11:53:42 Using secret token for csrf signing
	2023/11/27 11:53:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2023/11/27 11:53:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2023/11/27 11:53:42 Successful initial request to the apiserver, version: v1.16.0
	2023/11/27 11:53:42 Generating JWE encryption key
	2023/11/27 11:53:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2023/11/27 11:53:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2023/11/27 11:53:42 Initializing JWE encryption key from synchronized object
	2023/11/27 11:53:42 Creating in-cluster Sidecar client
	2023/11/27 11:53:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/11/27 11:53:42 Serving insecurely on HTTP port: 9090
	2023/11/27 11:54:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/11/27 11:54:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [faf77483d5ca] <==
	* I1127 11:53:33.456460       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1127 11:53:33.466066       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1127 11:53:33.466490       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1127 11:53:33.475979       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1127 11:53:33.476386       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-337707_02489a13-c2c6-474b-8bf5-ace1fec4726f!
	I1127 11:53:33.480675       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"98abe0df-03ac-4682-802d-e5228a304e84", APIVersion:"v1", ResourceVersion:"454", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-337707_02489a13-c2c6-474b-8bf5-ace1fec4726f became leader
	I1127 11:53:33.578862       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-337707_02489a13-c2c6-474b-8bf5-ace1fec4726f!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-337707 -n old-k8s-version-337707
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-337707 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-22vqx
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-337707 describe pod metrics-server-74d5856cc6-22vqx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-337707 describe pod metrics-server-74d5856cc6-22vqx: exit status 1 (63.177045ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-22vqx" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-337707 describe pod metrics-server-74d5856cc6-22vqx: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.12s)


Test pass (290/322)

Order  Passed test  Duration
3 TestDownloadOnly/v1.16.0/json-events 5.68
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.4/json-events 4.46
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.14
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
19 TestBinaryMirror 0.58
20 TestOffline 96.8
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
25 TestAddons/Setup 154.73
27 TestAddons/parallel/Registry 17.65
28 TestAddons/parallel/Ingress 22.88
29 TestAddons/parallel/InspektorGadget 10.92
30 TestAddons/parallel/MetricsServer 5.75
31 TestAddons/parallel/HelmTiller 11.47
33 TestAddons/parallel/CSI 104.97
34 TestAddons/parallel/Headlamp 16.94
35 TestAddons/parallel/CloudSpanner 5.73
36 TestAddons/parallel/LocalPath 11.92
37 TestAddons/parallel/NvidiaDevicePlugin 5.47
40 TestAddons/serial/GCPAuth/Namespaces 0.12
41 TestAddons/StoppedEnableDisable 13.41
42 TestCertOptions 106
43 TestCertExpiration 335
44 TestDockerFlags 85.99
45 TestForceSystemdFlag 51.73
46 TestForceSystemdEnv 133.58
48 TestKVMDriverInstallOrUpdate 2.92
52 TestErrorSpam/setup 52.28
53 TestErrorSpam/start 0.39
54 TestErrorSpam/status 0.86
55 TestErrorSpam/pause 1.25
56 TestErrorSpam/unpause 1.39
57 TestErrorSpam/stop 13.28
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 71.22
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 37.51
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.08
68 TestFunctional/serial/CacheCmd/cache/add_remote 2.45
69 TestFunctional/serial/CacheCmd/cache/add_local 1.37
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
71 TestFunctional/serial/CacheCmd/cache/list 0.06
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.27
74 TestFunctional/serial/CacheCmd/cache/delete 0.12
75 TestFunctional/serial/MinikubeKubectlCmd 0.12
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
77 TestFunctional/serial/ExtraConfig 40.36
78 TestFunctional/serial/ComponentHealth 0.06
79 TestFunctional/serial/LogsCmd 1.07
80 TestFunctional/serial/LogsFileCmd 1.11
81 TestFunctional/serial/InvalidService 4.4
83 TestFunctional/parallel/ConfigCmd 0.44
84 TestFunctional/parallel/DashboardCmd 21.89
85 TestFunctional/parallel/DryRun 0.31
86 TestFunctional/parallel/InternationalLanguage 0.15
87 TestFunctional/parallel/StatusCmd 1.07
91 TestFunctional/parallel/ServiceCmdConnect 8.76
92 TestFunctional/parallel/AddonsCmd 0.16
93 TestFunctional/parallel/PersistentVolumeClaim 44.37
95 TestFunctional/parallel/SSHCmd 0.46
96 TestFunctional/parallel/CpCmd 1.01
97 TestFunctional/parallel/MySQL 39.13
98 TestFunctional/parallel/FileSync 0.27
99 TestFunctional/parallel/CertSync 1.71
103 TestFunctional/parallel/NodeLabels 0.07
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.27
107 TestFunctional/parallel/License 0.32
108 TestFunctional/parallel/Version/short 0.06
109 TestFunctional/parallel/Version/components 0.97
110 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
111 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
112 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
113 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
114 TestFunctional/parallel/ImageCommands/ImageBuild 2.97
115 TestFunctional/parallel/ImageCommands/Setup 1.38
116 TestFunctional/parallel/DockerEnv/bash 0.98
117 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
118 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
119 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
120 TestFunctional/parallel/ServiceCmd/DeployApp 12.3
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.46
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.45
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 35.34
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.47
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.95
129 TestFunctional/parallel/ServiceCmd/List 0.45
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.47
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.86
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
133 TestFunctional/parallel/ServiceCmd/Format 0.33
134 TestFunctional/parallel/ServiceCmd/URL 0.43
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.71
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.9
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
139 TestFunctional/parallel/ProfileCmd/profile_list 0.32
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
141 TestFunctional/parallel/MountCmd/any-port 13.73
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
148 TestFunctional/parallel/MountCmd/specific-port 1.62
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.76
150 TestFunctional/delete_addon-resizer_images 0.07
151 TestFunctional/delete_my-image_image 0.01
152 TestFunctional/delete_minikube_cached_images 0.01
153 TestGvisorAddon 328.46
156 TestImageBuild/serial/Setup 51.42
157 TestImageBuild/serial/NormalBuild 1.62
158 TestImageBuild/serial/BuildWithBuildArg 1.36
159 TestImageBuild/serial/BuildWithDockerIgnore 0.4
160 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.3
163 TestIngressAddonLegacy/StartLegacyK8sCluster 83.21
165 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 17.42
166 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.52
167 TestIngressAddonLegacy/serial/ValidateIngressAddons 36.05
170 TestJSONOutput/start/Command 68.15
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.61
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 0.55
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 8.11
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.22
198 TestMainNoArgs 0.06
199 TestMinikubeProfile 102.58
202 TestMountStart/serial/StartWithMountFirst 28.82
203 TestMountStart/serial/VerifyMountFirst 0.43
204 TestMountStart/serial/StartWithMountSecond 28.06
205 TestMountStart/serial/VerifyMountSecond 0.41
206 TestMountStart/serial/DeleteFirst 0.67
207 TestMountStart/serial/VerifyMountPostDelete 0.42
208 TestMountStart/serial/Stop 2.1
209 TestMountStart/serial/RestartStopped 23.54
210 TestMountStart/serial/VerifyMountPostStop 0.42
213 TestMultiNode/serial/FreshStart2Nodes 125.28
214 TestMultiNode/serial/DeployApp2Nodes 4.94
215 TestMultiNode/serial/PingHostFrom2Pods 0.95
216 TestMultiNode/serial/AddNode 47.58
217 TestMultiNode/serial/ProfileList 0.22
218 TestMultiNode/serial/CopyFile 7.86
219 TestMultiNode/serial/StopNode 4
220 TestMultiNode/serial/StartAfterStop 32.17
221 TestMultiNode/serial/RestartKeepsNodes 171.72
222 TestMultiNode/serial/DeleteNode 1.76
223 TestMultiNode/serial/StopMultiNode 25.64
224 TestMultiNode/serial/RestartMultiNode 119.74
225 TestMultiNode/serial/ValidateNameConflict 53.33
230 TestPreload 198.75
232 TestScheduledStopUnix 123.87
233 TestSkaffold 138.95
236 TestRunningBinaryUpgrade 193.52
238 TestKubernetesUpgrade 165.32
251 TestStoppedBinaryUpgrade/Setup 0.34
252 TestStoppedBinaryUpgrade/Upgrade 204.5
254 TestPause/serial/Start 75.21
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
257 TestNoKubernetes/serial/StartWithK8s 88.51
265 TestNetworkPlugins/group/auto/Start 124.55
266 TestPause/serial/SecondStartNoReconfiguration 87.34
267 TestNoKubernetes/serial/StartWithStopK8s 43.79
268 TestNetworkPlugins/group/auto/KubeletFlags 0.23
269 TestNetworkPlugins/group/auto/NetCatPod 12.37
270 TestNoKubernetes/serial/Start 30.09
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.73
272 TestNetworkPlugins/group/auto/DNS 0.23
273 TestNetworkPlugins/group/auto/Localhost 0.24
274 TestNetworkPlugins/group/auto/HairPin 0.22
275 TestNetworkPlugins/group/kindnet/Start 99.82
276 TestPause/serial/Pause 1
277 TestNetworkPlugins/group/calico/Start 133.94
278 TestPause/serial/VerifyStatus 0.27
279 TestPause/serial/Unpause 0.59
280 TestPause/serial/PauseAgain 0.76
281 TestPause/serial/DeletePaused 1.08
282 TestPause/serial/VerifyDeletedResources 0.73
283 TestNetworkPlugins/group/custom-flannel/Start 137.97
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
285 TestNoKubernetes/serial/ProfileList 0.62
286 TestNoKubernetes/serial/Stop 2.24
287 TestNoKubernetes/serial/StartNoArgs 106.75
288 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
289 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
290 TestNetworkPlugins/group/kindnet/NetCatPod 15.32
291 TestNetworkPlugins/group/kindnet/DNS 0.25
292 TestNetworkPlugins/group/kindnet/Localhost 0.19
293 TestNetworkPlugins/group/kindnet/HairPin 0.18
294 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
295 TestNetworkPlugins/group/false/Start 76.63
296 TestNetworkPlugins/group/enable-default-cni/Start 97.79
297 TestNetworkPlugins/group/calico/ControllerPod 5.04
298 TestNetworkPlugins/group/calico/KubeletFlags 0.23
299 TestNetworkPlugins/group/calico/NetCatPod 13.44
300 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
301 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.48
302 TestNetworkPlugins/group/calico/DNS 0.25
303 TestNetworkPlugins/group/calico/Localhost 0.2
304 TestNetworkPlugins/group/calico/HairPin 0.23
305 TestNetworkPlugins/group/custom-flannel/DNS 0.26
306 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
307 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
308 TestNetworkPlugins/group/flannel/Start 96.55
309 TestNetworkPlugins/group/bridge/Start 116.99
310 TestNetworkPlugins/group/false/KubeletFlags 0.25
311 TestNetworkPlugins/group/false/NetCatPod 11.41
312 TestNetworkPlugins/group/false/DNS 0.21
313 TestNetworkPlugins/group/false/Localhost 0.17
314 TestNetworkPlugins/group/false/HairPin 0.16
315 TestNetworkPlugins/group/kubenet/Start 99.85
316 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
317 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.37
318 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
319 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
320 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
322 TestStartStop/group/old-k8s-version/serial/FirstStart 166.41
323 TestNetworkPlugins/group/flannel/ControllerPod 5.02
324 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
325 TestNetworkPlugins/group/flannel/NetCatPod 14.46
326 TestNetworkPlugins/group/flannel/DNS 0.21
327 TestNetworkPlugins/group/flannel/Localhost 0.2
328 TestNetworkPlugins/group/flannel/HairPin 0.18
329 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
330 TestNetworkPlugins/group/bridge/NetCatPod 12.41
331 TestNetworkPlugins/group/bridge/DNS 0.25
332 TestNetworkPlugins/group/bridge/Localhost 0.22
333 TestNetworkPlugins/group/bridge/HairPin 0.28
335 TestStartStop/group/no-preload/serial/FirstStart 95.44
337 TestStartStop/group/embed-certs/serial/FirstStart 99.74
338 TestNetworkPlugins/group/kubenet/KubeletFlags 0.25
339 TestNetworkPlugins/group/kubenet/NetCatPod 12.37
340 TestNetworkPlugins/group/kubenet/DNS 0.18
341 TestNetworkPlugins/group/kubenet/Localhost 0.15
342 TestNetworkPlugins/group/kubenet/HairPin 0.16
344 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 92.07
345 TestStartStop/group/no-preload/serial/DeployApp 9.53
346 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.31
347 TestStartStop/group/no-preload/serial/Stop 13.17
348 TestStartStop/group/embed-certs/serial/DeployApp 10.52
349 TestStartStop/group/old-k8s-version/serial/DeployApp 9.54
350 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.26
351 TestStartStop/group/no-preload/serial/SecondStart 335.88
352 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.92
353 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.39
354 TestStartStop/group/old-k8s-version/serial/Stop 13.15
355 TestStartStop/group/embed-certs/serial/Stop 13.22
356 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
357 TestStartStop/group/old-k8s-version/serial/SecondStart 459.12
358 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
359 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.51
361 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.17
362 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.13
363 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
364 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 338.66
370 TestStartStop/group/newest-cni/serial/FirstStart 86.6
371 TestStartStop/group/newest-cni/serial/DeployApp 0
372 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.07
373 TestStartStop/group/newest-cni/serial/Stop 8.13
374 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
375 TestStartStop/group/newest-cni/serial/SecondStart 49.16
376 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
377 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
378 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
379 TestStartStop/group/newest-cni/serial/Pause 2.49
380 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 16.03
381 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
382 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
383 TestStartStop/group/no-preload/serial/Pause 2.55
384 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.03
385 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
386 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
387 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.61
388 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
389 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
391 TestStartStop/group/old-k8s-version/serial/Pause 2.42

TestDownloadOnly/v1.16.0/json-events (5.68s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-834780 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-834780 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (5.682947522s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (5.68s)
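
For reference, the download-only flow above can be replayed with a released minikube binary; "download-demo" is an illustrative profile name, and nothing is booted:

    # Caches the ISO, preload tarball and binaries under $MINIKUBE_HOME
    # (default ~/.minikube) without creating a VM; profile name is hypothetical.
    minikube start -o=json --download-only -p download-demo \
        --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2
    minikube delete -p download-demo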

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)
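
preload-exists passes in 0.00s because it only needs to find the tarball cached by the previous subtest. A rough manual equivalent, assuming the default MINIKUBE_HOME of ~/.minikube:

    # Both paths appear in the LogsDuration output below; adjust if
    # MINIKUBE_HOME points elsewhere.
    ls ~/.minikube/cache/iso/amd64/
    ls ~/.minikube/cache/preloaded-tarball/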

TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-834780
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-834780: exit status 85 (73.211693ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-834780 | jenkins | v1.32.0 | 27 Nov 23 10:55 UTC |          |
	|         | -p download-only-834780        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 10:55:18
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 10:55:18.029723  129665 out.go:296] Setting OutFile to fd 1 ...
	I1127 10:55:18.029870  129665 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 10:55:18.029882  129665 out.go:309] Setting ErrFile to fd 2...
	I1127 10:55:18.029889  129665 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 10:55:18.030067  129665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-122411/.minikube/bin
	W1127 10:55:18.030187  129665 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17644-122411/.minikube/config/config.json: open /home/jenkins/minikube-integration/17644-122411/.minikube/config/config.json: no such file or directory
	I1127 10:55:18.030759  129665 out.go:303] Setting JSON to true
	I1127 10:55:18.031792  129665 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2269,"bootTime":1701080249,"procs":420,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 10:55:18.031856  129665 start.go:138] virtualization: kvm guest
	I1127 10:55:18.034397  129665 out.go:97] [download-only-834780] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 10:55:18.035951  129665 out.go:169] MINIKUBE_LOCATION=17644
	W1127 10:55:18.034501  129665 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17644-122411/.minikube/cache/preloaded-tarball: no such file or directory
	I1127 10:55:18.034551  129665 notify.go:220] Checking for updates...
	I1127 10:55:18.038827  129665 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 10:55:18.040289  129665 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17644-122411/kubeconfig
	I1127 10:55:18.041702  129665 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-122411/.minikube
	I1127 10:55:18.043014  129665 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1127 10:55:18.045534  129665 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1127 10:55:18.045750  129665 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 10:55:18.143907  129665 out.go:97] Using the kvm2 driver based on user configuration
	I1127 10:55:18.143940  129665 start.go:298] selected driver: kvm2
	I1127 10:55:18.143946  129665 start.go:902] validating driver "kvm2" against <nil>
	I1127 10:55:18.144258  129665 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 10:55:18.144360  129665 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17644-122411/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1127 10:55:18.159798  129665 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1127 10:55:18.159876  129665 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1127 10:55:18.160346  129665 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1127 10:55:18.160490  129665 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1127 10:55:18.160546  129665 cni.go:84] Creating CNI manager for ""
	I1127 10:55:18.160563  129665 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1127 10:55:18.160575  129665 start_flags.go:323] config:
	{Name:download-only-834780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-834780 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 10:55:18.160781  129665 iso.go:125] acquiring lock: {Name:mk7a2a8e57d33d30020383e75b407d4341747681 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1127 10:55:18.162858  129665 out.go:97] Downloading VM boot image ...
	I1127 10:55:18.162900  129665 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17644-122411/.minikube/cache/iso/amd64/minikube-v1.32.1-1700142131-17634-amd64.iso
	I1127 10:55:20.143201  129665 out.go:97] Starting control plane node download-only-834780 in cluster download-only-834780
	I1127 10:55:20.143239  129665 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1127 10:55:20.168353  129665 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1127 10:55:20.168396  129665 cache.go:56] Caching tarball of preloaded images
	I1127 10:55:20.168558  129665 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1127 10:55:20.170235  129665 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1127 10:55:20.170252  129665 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1127 10:55:20.196311  129665 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/17644-122411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1127 10:55:22.348957  129665 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1127 10:55:22.349045  129665 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17644-122411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-834780"
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
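
The non-zero exit is expected: the profile was created with --download-only, so there is no control plane for "minikube logs" to read (hence 'The control plane node "" does not exist.'). The same behavior can be observed by hand, with an illustrative profile name:

    minikube logs -p download-demo
    echo "exit: $?"   # exit status 85 in this run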

TestDownloadOnly/v1.28.4/json-events (4.46s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-834780 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-834780 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=kvm2 : (4.456467505s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (4.46s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-834780
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-834780: exit status 85 (74.375223ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-834780 | jenkins | v1.32.0 | 27 Nov 23 10:55 UTC |          |
	|         | -p download-only-834780        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-834780 | jenkins | v1.32.0 | 27 Nov 23 10:55 UTC |          |
	|         | -p download-only-834780        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/27 10:55:23
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1127 10:55:23.787905  129712 out.go:296] Setting OutFile to fd 1 ...
	I1127 10:55:23.788142  129712 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 10:55:23.788150  129712 out.go:309] Setting ErrFile to fd 2...
	I1127 10:55:23.788155  129712 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 10:55:23.788323  129712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-122411/.minikube/bin
	W1127 10:55:23.788443  129712 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17644-122411/.minikube/config/config.json: open /home/jenkins/minikube-integration/17644-122411/.minikube/config/config.json: no such file or directory
	I1127 10:55:23.788846  129712 out.go:303] Setting JSON to true
	I1127 10:55:23.789813  129712 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2275,"bootTime":1701080249,"procs":416,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 10:55:23.789873  129712 start.go:138] virtualization: kvm guest
	I1127 10:55:23.792100  129712 out.go:97] [download-only-834780] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 10:55:23.793646  129712 out.go:169] MINIKUBE_LOCATION=17644
	I1127 10:55:23.792247  129712 notify.go:220] Checking for updates...
	I1127 10:55:23.796380  129712 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 10:55:23.797778  129712 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17644-122411/kubeconfig
	I1127 10:55:23.799008  129712 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-122411/.minikube
	I1127 10:55:23.800342  129712 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-834780"
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.14s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-834780
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-379740 --alsologtostderr --binary-mirror http://127.0.0.1:32867 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-379740" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-379740
--- PASS: TestBinaryMirror (0.58s)
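
The --binary-mirror flag points minikube at an alternative location for the kubectl/kubelet/kubeadm downloads. A minimal sketch, assuming a pre-populated mirror directory; the directory, port and profile name below are illustrative:

    python3 -m http.server 32867 --directory /srv/k8s-mirror &
    minikube start --download-only -p mirror-demo \
        --binary-mirror http://127.0.0.1:32867 --driver=kvm2
    minikube delete -p mirror-demo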

TestOffline (96.8s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-473725 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-473725 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m35.768984769s)
helpers_test.go:175: Cleaning up "offline-docker-473725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-473725
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-473725: (1.02857824s)
--- PASS: TestOffline (96.80s)
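
The same start invocation can be replayed by hand; on a host whose caches are already warm it should not need to reach the network ("offline-demo" is an illustrative profile name):

    minikube start -p offline-demo --alsologtostderr -v=1 \
        --memory=2048 --wait=true --driver=kvm2
    minikube delete -p offline-demo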

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-097795
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-097795: exit status 85 (61.707839ms)
-- stdout --
	* Profile "addons-097795" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-097795"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-097795
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-097795: exit status 85 (62.305399ms)
-- stdout --
	* Profile "addons-097795" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-097795"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (154.73s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-097795 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-097795 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m34.731371095s)
--- PASS: TestAddons/Setup (154.73s)
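
A trimmed-down version of the same setup, enabling only a few of the addons exercised here ("addons-demo" is an illustrative profile name):

    minikube start -p addons-demo --wait=true --memory=4000 --driver=kvm2 \
        --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns
    minikube addons list -p addons-demo   # confirm which addons are enabled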

TestAddons/parallel/Registry (17.65s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 21.590369ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-9xz2l" [4c8afff9-ecf4-4774-8975-5e4f5e1fe0c0] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.045146943s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ndgvb" [6829407f-9cee-4651-b6d6-bc670a5919eb] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.016124555s
addons_test.go:339: (dbg) Run:  kubectl --context addons-097795 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-097795 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-097795 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.617194293s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-097795 ip
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-097795 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.65s)
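
The in-cluster reachability check above boils down to a single kubectl one-liner, reusable against any profile with the registry addon enabled (context name illustrative):

    kubectl --context addons-demo run --rm registry-test --restart=Never \
        --image=gcr.io/k8s-minikube/busybox -it -- \
        sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"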

TestAddons/parallel/Ingress (22.88s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-097795 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-097795 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-097795 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d60ddb9d-8b1e-4338-a8af-b0794e1ea1e6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d60ddb9d-8b1e-4338-a8af-b0794e1ea1e6] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.031785401s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-097795 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-097795 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-097795 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.71
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-097795 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-097795 addons disable ingress-dns --alsologtostderr -v=1: (1.132560437s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-097795 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-097795 addons disable ingress --alsologtostderr -v=1: (7.685996025s)
--- PASS: TestAddons/parallel/Ingress (22.88s)
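
The two probes the test performs can be reproduced directly: curl the ingress from inside the VM with the expected Host header, then resolve an ingress-dns name against the cluster IP (profile name illustrative):

    minikube -p addons-demo ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test "$(minikube -p addons-demo ip)"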

TestAddons/parallel/InspektorGadget (10.92s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6lgbr" [148dc6eb-6c56-4f3b-864c-41aec645c14d] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.012458892s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-097795
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-097795: (5.904432896s)
--- PASS: TestAddons/parallel/InspektorGadget (10.92s)

TestAddons/parallel/MetricsServer (5.75s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 21.864632ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-lc4pv" [311a895c-be59-4979-bc9e-7619d98d8642] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.046499979s
addons_test.go:414: (dbg) Run:  kubectl --context addons-097795 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-097795 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.75s)
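
Once the metrics-server pod is healthy, resource metrics are available through the standard kubectl interface (context name illustrative):

    kubectl --context addons-demo top pods -n kube-system
    kubectl --context addons-demo top nodes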

TestAddons/parallel/HelmTiller (11.47s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 3.654512ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-mjml7" [a4d2c94f-7b41-4845-9194-bd4ee2f7b092] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.025079235s
addons_test.go:472: (dbg) Run:  kubectl --context addons-097795 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-097795 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.891938176s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-097795 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.47s)
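
The Tiller check is a one-off client pod running "helm version" against the in-cluster Tiller; the same probe with an illustrative context name:

    kubectl --context addons-demo run --rm helm-test --restart=Never \
        --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version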

TestAddons/parallel/CSI (104.97s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 22.360885ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-097795 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
2023/11/27 10:58:20 [DEBUG] GET http://192.168.39.71:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-097795 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4e06189d-e0e0-40aa-bcaa-2b054d9e1069] Pending
helpers_test.go:344: "task-pv-pod" [4e06189d-e0e0-40aa-bcaa-2b054d9e1069] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4e06189d-e0e0-40aa-bcaa-2b054d9e1069] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.024704821s
addons_test.go:583: (dbg) Run:  kubectl --context addons-097795 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-097795 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-097795 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-097795 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-097795 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-097795 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-097795 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-097795 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [fec56b05-25e7-4bf6-b4ee-06bbc82d5ffc] Pending
helpers_test.go:344: "task-pv-pod-restore" [fec56b05-25e7-4bf6-b4ee-06bbc82d5ffc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [fec56b05-25e7-4bf6-b4ee-06bbc82d5ffc] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.022038347s
addons_test.go:625: (dbg) Run:  kubectl --context addons-097795 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-097795 delete pod task-pv-pod-restore: (1.142864571s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-097795 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-097795 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-097795 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-097795 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.714028826s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-097795 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (104.97s)
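
Stripped of the polling, the CSI scenario above is a plain provision / snapshot / restore round-trip; a sketch assuming manifests equivalent to minikube's testdata/csi-hostpath-driver ones:

    kubectl create -f pvc.yaml              # PVC "hpvc", bound by csi-hostpath-driver
    kubectl create -f pv-pod.yaml           # pod "task-pv-pod" mounting the PVC
    kubectl create -f snapshot.yaml         # VolumeSnapshot "new-snapshot-demo"
    kubectl delete pod task-pv-pod
    kubectl delete pvc hpvc
    kubectl create -f pvc-restore.yaml      # PVC "hpvc-restore" sourced from the snapshot
    kubectl create -f pv-pod-restore.yaml   # pod mounting the restored PVC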

TestAddons/parallel/Headlamp (16.94s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-097795 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-097795 --alsologtostderr -v=1: (1.923885173s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-n92s2" [9e3a58fe-cfc3-46c7-abd2-e7317bdf357d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-n92s2" [9e3a58fe-cfc3-46c7-abd2-e7317bdf357d] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.012087808s
--- PASS: TestAddons/parallel/Headlamp (16.94s)
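
Headlamp is enabled on demand rather than at start time; the minimal flow, with an illustrative profile name:

    minikube addons enable headlamp -p addons-demo --alsologtostderr -v=1
    kubectl --context addons-demo -n headlamp get pods   # wait for Running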

TestAddons/parallel/CloudSpanner (5.73s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-h4cr4" [83368ac1-e066-4caf-8b68-e5e2a66e79ea] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.010216513s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-097795
--- PASS: TestAddons/parallel/CloudSpanner (5.73s)

TestAddons/parallel/LocalPath (11.92s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-097795 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-097795 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097795 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [3bd9b1c5-2929-4ef8-94ba-ad7fc650b0d3] Pending
helpers_test.go:344: "test-local-path" [3bd9b1c5-2929-4ef8-94ba-ad7fc650b0d3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [3bd9b1c5-2929-4ef8-94ba-ad7fc650b0d3] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [3bd9b1c5-2929-4ef8-94ba-ad7fc650b0d3] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.020347781s
addons_test.go:890: (dbg) Run:  kubectl --context addons-097795 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-097795 ssh "cat /opt/local-path-provisioner/pvc-99517928-6fbc-4347-be15-58bcd73f3007_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-097795 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-097795 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-097795 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (11.92s)
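
The repeated jsonpath queries above are the helper polling the PVC phase until the local-path provisioner binds it. A minimal sketch of that loop, using the context and object names from this run (the 2s interval and the "Bound" acceptance condition are assumptions; the helper's actual backoff and exit criteria are not shown in the log):

    # Poll the PVC phase until it leaves Pending (assumed target: Bound).
    while [ "$(kubectl --context addons-097795 get pvc test-pvc -n default \
        -o jsonpath='{.status.phase}')" != "Bound" ]; do
      sleep 2
    done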

TestAddons/parallel/NvidiaDevicePlugin (5.47s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-s74kb" [f3a6d526-d8b4-45b3-8f01-12e3fee9e2b7] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.017719634s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-097795
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.47s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-097795 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-097795 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/StoppedEnableDisable (13.41s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-097795
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-097795: (13.107230581s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-097795
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-097795
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-097795
--- PASS: TestAddons/StoppedEnableDisable (13.41s)

TestCertOptions (106s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-413251 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-413251 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m44.357680705s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-413251 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-413251 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-413251 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-413251" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-413251
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-413251: (1.085524452s)
--- PASS: TestCertOptions (106.00s)
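
The SAN check above can be repeated by hand. A minimal sketch; the expected entries are inferred from the --apiserver-ips/--apiserver-names flags passed to start, and the grep is an addition, not part of the test:

    # Dump the generated apiserver certificate and look for the requested SANs
    # (192.168.15.15 and www.google.com should appear alongside the defaults).
    out/minikube-linux-amd64 -p cert-options-413251 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'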

TestCertExpiration (335s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-384160 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-384160 --memory=2048 --cert-expiration=3m --driver=kvm2 : (2m3.217280259s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-384160 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-384160 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (30.308466248s)
helpers_test.go:175: Cleaning up "cert-expiration-384160" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-384160
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-384160: (1.478371783s)
--- PASS: TestCertExpiration (335.00s)
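
The two start invocations above exercise certificate rotation. A minimal sketch of the flow; the explicit wait between phases is an assumption, based on the roughly three minutes of the 335s total not accounted for by the two starts and the delete:

    # Phase 1: provision a cluster whose certs expire in 3 minutes.
    out/minikube-linux-amd64 start -p cert-expiration-384160 --memory=2048 --cert-expiration=3m --driver=kvm2
    sleep 180  # assumption: wait for the short-lived certs to lapse
    # Phase 2: restart with a one-year expiry; minikube must regenerate the expired certs.
    out/minikube-linux-amd64 start -p cert-expiration-384160 --memory=2048 --cert-expiration=8760h --driver=kvm2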

TestDockerFlags (85.99s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-362668 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-362668 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m24.457589583s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-362668 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-362668 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-362668" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-362668
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-362668: (1.054164993s)
--- PASS: TestDockerFlags (85.99s)
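
To confirm by hand that the --docker-env/--docker-opt flags reached the daemon, the same systemd queries can be filtered. A minimal sketch; the expected strings are inferred from the flags passed to start, and the greps are additions:

    # Environment should carry FOO=BAR and BAZ=BAT from --docker-env.
    out/minikube-linux-amd64 -p docker-flags-362668 ssh \
      "sudo systemctl show docker --property=Environment --no-pager" | grep FOO=BAR
    # ExecStart should carry --debug and --icc=true from --docker-opt.
    out/minikube-linux-amd64 -p docker-flags-362668 ssh \
      "sudo systemctl show docker --property=ExecStart --no-pager" | grep icc=true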

TestForceSystemdFlag (51.73s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-613645 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-613645 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (50.488301554s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-613645 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-613645" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-613645
--- PASS: TestForceSystemdFlag (51.73s)

TestForceSystemdEnv (133.58s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-256632 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-256632 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (2m12.228458253s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-256632 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-256632" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-256632
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-256632: (1.062470066s)
--- PASS: TestForceSystemdEnv (133.58s)

TestKVMDriverInstallOrUpdate (2.92s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.92s)

TestErrorSpam/setup (52.28s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-104710 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-104710 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-104710 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-104710 --driver=kvm2 : (52.278936446s)
--- PASS: TestErrorSpam/setup (52.28s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104710 --log_dir /tmp/nospam-104710 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104710 --log_dir /tmp/nospam-104710 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104710 --log_dir /tmp/nospam-104710 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.86s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104710 --log_dir /tmp/nospam-104710 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104710 --log_dir /tmp/nospam-104710 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104710 --log_dir /tmp/nospam-104710 status
--- PASS: TestErrorSpam/status (0.86s)

TestErrorSpam/pause (1.25s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104710 --log_dir /tmp/nospam-104710 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104710 --log_dir /tmp/nospam-104710 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104710 --log_dir /tmp/nospam-104710 pause
--- PASS: TestErrorSpam/pause (1.25s)

TestErrorSpam/unpause (1.39s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104710 --log_dir /tmp/nospam-104710 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104710 --log_dir /tmp/nospam-104710 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104710 --log_dir /tmp/nospam-104710 unpause
--- PASS: TestErrorSpam/unpause (1.39s)

TestErrorSpam/stop (13.28s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104710 --log_dir /tmp/nospam-104710 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-104710 --log_dir /tmp/nospam-104710 stop: (13.108890869s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104710 --log_dir /tmp/nospam-104710 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104710 --log_dir /tmp/nospam-104710 stop
--- PASS: TestErrorSpam/stop (13.28s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17644-122411/.minikube/files/etc/test/nested/copy/129653/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (71.22s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-397013 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-397013 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m11.221860079s)
--- PASS: TestFunctional/serial/StartWithProxy (71.22s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.51s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-397013 --alsologtostderr -v=8
E1127 11:03:04.045533  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
E1127 11:03:04.051333  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
E1127 11:03:04.061560  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
E1127 11:03:04.081842  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
E1127 11:03:04.122113  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
E1127 11:03:04.202504  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
E1127 11:03:04.362959  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
E1127 11:03:04.683436  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
E1127 11:03:05.324433  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
E1127 11:03:06.604696  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
E1127 11:03:09.164852  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
E1127 11:03:14.285111  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-397013 --alsologtostderr -v=8: (37.509978307s)
functional_test.go:659: soft start took 37.510614677s for "functional-397013" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.51s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-397013 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.45s)

TestFunctional/serial/CacheCmd/cache/add_local (1.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-397013 /tmp/TestFunctionalserialCacheCmdcacheadd_local4283333557/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 cache add minikube-local-cache-test:functional-397013
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-397013 cache add minikube-local-cache-test:functional-397013: (1.046775535s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 cache delete minikube-local-cache-test:functional-397013
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-397013
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.37s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E1127 11:03:24.525913  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397013 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (242.178158ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.27s)
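
The round-trip above is: delete the image inside the node, prove it is gone, then restore it from minikube's host-side cache. A minimal sketch, with all commands taken verbatim from this run:

    # Remove the cached image from the node's runtime.
    out/minikube-linux-amd64 -p functional-397013 ssh sudo docker rmi registry.k8s.io/pause:latest
    # crictl now exits 1: no such image "registry.k8s.io/pause:latest" present.
    out/minikube-linux-amd64 -p functional-397013 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    # Push everything in the on-disk cache back into the node, then re-check.
    out/minikube-linux-amd64 -p functional-397013 cache reload
    out/minikube-linux-amd64 -p functional-397013 ssh sudo crictl inspecti registry.k8s.io/pause:latest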

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 kubectl -- --context functional-397013 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-397013 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (40.36s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-397013 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1127 11:03:45.006923  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-397013 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.362128711s)
functional_test.go:757: restart took 40.362265057s for "functional-397013" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.36s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-397013 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.07s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-397013 logs: (1.068471388s)
--- PASS: TestFunctional/serial/LogsCmd (1.07s)

TestFunctional/serial/LogsFileCmd (1.11s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 logs --file /tmp/TestFunctionalserialLogsFileCmd3035755138/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-397013 logs --file /tmp/TestFunctionalserialLogsFileCmd3035755138/001/logs.txt: (1.106595082s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.11s)

TestFunctional/serial/InvalidService (4.4s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-397013 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-397013
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-397013: exit status 115 (286.920458ms)
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.107:31244 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-397013 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.40s)
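
The exit status 115 above is minikube refusing to print a URL for a Service with no running backing pod (per the SVC_UNREACHABLE message in stderr). A minimal sketch of the check; testdata/invalidsvc.yaml ships with the minikube test suite and its contents are not shown here:

    kubectl --context functional-397013 apply -f testdata/invalidsvc.yaml
    # No pod backs invalid-svc, so this exits 115 with SVC_UNREACHABLE
    # instead of printing a reachable URL.
    out/minikube-linux-amd64 service invalid-svc -p functional-397013; echo "exit: $?"
    kubectl --context functional-397013 delete -f testdata/invalidsvc.yaml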

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397013 config get cpus: exit status 14 (78.644014ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397013 config get cpus: exit status 14 (64.535453ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
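
The two exit status 14 runs above are the expected behaviour when reading an unset key. A minimal sketch of the set/get/unset cycle the test drives, with commands taken verbatim from this run:

    out/minikube-linux-amd64 -p functional-397013 config unset cpus
    out/minikube-linux-amd64 -p functional-397013 config get cpus   # exit 14: key not in config
    out/minikube-linux-amd64 -p functional-397013 config set cpus 2
    out/minikube-linux-amd64 -p functional-397013 config get cpus   # succeeds once the key is set
    out/minikube-linux-amd64 -p functional-397013 config unset cpus
    out/minikube-linux-amd64 -p functional-397013 config get cpus   # exit 14 again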

TestFunctional/parallel/DashboardCmd (21.89s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-397013 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-397013 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 137119: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (21.89s)

TestFunctional/parallel/DryRun (0.31s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-397013 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-397013 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (157.673337ms)
-- stdout --
	* [functional-397013] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17644-122411/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-122411/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1127 11:04:42.355086  136569 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:04:42.355768  136569 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:04:42.355784  136569 out.go:309] Setting ErrFile to fd 2...
	I1127 11:04:42.355792  136569 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:04:42.356102  136569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-122411/.minikube/bin
	I1127 11:04:42.356877  136569 out.go:303] Setting JSON to false
	I1127 11:04:42.358151  136569 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2834,"bootTime":1701080249,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 11:04:42.358235  136569 start.go:138] virtualization: kvm guest
	I1127 11:04:42.360240  136569 out.go:177] * [functional-397013] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1127 11:04:42.362268  136569 out.go:177]   - MINIKUBE_LOCATION=17644
	I1127 11:04:42.362312  136569 notify.go:220] Checking for updates...
	I1127 11:04:42.363566  136569 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 11:04:42.365050  136569 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17644-122411/kubeconfig
	I1127 11:04:42.366715  136569 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-122411/.minikube
	I1127 11:04:42.368159  136569 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 11:04:42.369767  136569 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 11:04:42.371749  136569 config.go:182] Loaded profile config "functional-397013": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1127 11:04:42.372169  136569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:04:42.372258  136569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:04:42.387926  136569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44173
	I1127 11:04:42.388401  136569 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:04:42.388960  136569 main.go:141] libmachine: Using API Version  1
	I1127 11:04:42.388997  136569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:04:42.389436  136569 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:04:42.389655  136569 main.go:141] libmachine: (functional-397013) Calling .DriverName
	I1127 11:04:42.389930  136569 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 11:04:42.390271  136569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:04:42.390309  136569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:04:42.404633  136569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40297
	I1127 11:04:42.405045  136569 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:04:42.405507  136569 main.go:141] libmachine: Using API Version  1
	I1127 11:04:42.405532  136569 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:04:42.405846  136569 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:04:42.406025  136569 main.go:141] libmachine: (functional-397013) Calling .DriverName
	I1127 11:04:42.439895  136569 out.go:177] * Using the kvm2 driver based on existing profile
	I1127 11:04:42.441196  136569 start.go:298] selected driver: kvm2
	I1127 11:04:42.441213  136569 start.go:902] validating driver "kvm2" against &{Name:functional-397013 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-397013 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.107 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:04:42.441380  136569 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 11:04:42.443703  136569 out.go:177] 
	W1127 11:04:42.445454  136569 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1127 11:04:42.447490  136569 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-397013 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.31s)
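
--dry-run validates the requested configuration without touching the VM, which is why the 250MB request fails fast. A minimal sketch; the command is taken verbatim from this run and the trailing echo is an addition:

    # 250MB is below minikube's 1800MB usable minimum, so validation exits 23
    # with RSRC_INSUFFICIENT_REQ_MEMORY before any VM work starts.
    out/minikube-linux-amd64 start -p functional-397013 --dry-run --memory 250MB \
      --alsologtostderr --driver=kvm2; echo "exit: $?"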

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-397013 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-397013 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (153.986414ms)
-- stdout --
	* [functional-397013] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17644-122411/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-122411/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1127 11:04:42.660335  136623 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:04:42.660480  136623 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:04:42.660490  136623 out.go:309] Setting ErrFile to fd 2...
	I1127 11:04:42.660524  136623 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:04:42.660863  136623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-122411/.minikube/bin
	I1127 11:04:42.661384  136623 out.go:303] Setting JSON to false
	I1127 11:04:42.662230  136623 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":2834,"bootTime":1701080249,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1127 11:04:42.662289  136623 start.go:138] virtualization: kvm guest
	I1127 11:04:42.664580  136623 out.go:177] * [functional-397013] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1127 11:04:42.666519  136623 out.go:177]   - MINIKUBE_LOCATION=17644
	I1127 11:04:42.668327  136623 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1127 11:04:42.666528  136623 notify.go:220] Checking for updates...
	I1127 11:04:42.670101  136623 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17644-122411/kubeconfig
	I1127 11:04:42.671566  136623 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-122411/.minikube
	I1127 11:04:42.673134  136623 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1127 11:04:42.674700  136623 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1127 11:04:42.676283  136623 config.go:182] Loaded profile config "functional-397013": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1127 11:04:42.676732  136623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:04:42.676802  136623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:04:42.692016  136623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35037
	I1127 11:04:42.692469  136623 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:04:42.693190  136623 main.go:141] libmachine: Using API Version  1
	I1127 11:04:42.693226  136623 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:04:42.693652  136623 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:04:42.693837  136623 main.go:141] libmachine: (functional-397013) Calling .DriverName
	I1127 11:04:42.694106  136623 driver.go:378] Setting default libvirt URI to qemu:///system
	I1127 11:04:42.694435  136623 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:04:42.694483  136623 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:04:42.709265  136623 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37751
	I1127 11:04:42.709722  136623 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:04:42.710333  136623 main.go:141] libmachine: Using API Version  1
	I1127 11:04:42.710361  136623 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:04:42.710673  136623 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:04:42.710908  136623 main.go:141] libmachine: (functional-397013) Calling .DriverName
	I1127 11:04:42.744167  136623 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1127 11:04:42.745765  136623 start.go:298] selected driver: kvm2
	I1127 11:04:42.745782  136623 start.go:902] validating driver "kvm2" against &{Name:functional-397013 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-397013 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.107 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1127 11:04:42.745890  136623 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1127 11:04:42.748058  136623 out.go:177] 
	W1127 11:04:42.749640  136623 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1127 11:04:42.750963  136623 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (1.07s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.07s)

TestFunctional/parallel/ServiceCmdConnect (8.76s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-397013 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-397013 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-fn9ch" [3435ba28-b168-4874-96da-2ab700334904] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-fn9ch" [3435ba28-b168-4874-96da-2ab700334904] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.018500693s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.107:30146
functional_test.go:1674: http://192.168.39.107:30146: success! body:

Hostname: hello-node-connect-55497b8b78-fn9ch

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.107:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.107:30146
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.76s)
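The flow above is: create a deployment, expose it as a NodePort service, resolve the node URL with `minikube service --url`, then GET it and inspect the echoed body. A compact sketch of that sequence with the names from the log (the pod-readiness wait the real test performs is elided):

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

// run executes a command and returns trimmed stdout, aborting on failure.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).Output()
	if err != nil {
		log.Fatalf("%s %v: %v", name, args, err)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	run("kubectl", "create", "deployment", "hello-node-connect",
		"--image=registry.k8s.io/echoserver:1.8")
	run("kubectl", "expose", "deployment", "hello-node-connect",
		"--type=NodePort", "--port=8080")
	// The real test waits for the pod to be Running before this point.
	url := run("minikube", "-p", "functional-397013",
		"service", "hello-node-connect", "--url")
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // echoserver reflects hostname and headers back
}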

TestFunctional/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (44.37s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [cebc52e8-8985-4f23-b9ed-657e50e7e143] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.012280782s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-397013 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-397013 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-397013 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-397013 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-397013 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bd98be0b-688d-40a4-a776-c487952fd0d5] Pending
helpers_test.go:344: "sp-pod" [bd98be0b-688d-40a4-a776-c487952fd0d5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bd98be0b-688d-40a4-a776-c487952fd0d5] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.026371863s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-397013 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-397013 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-397013 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [28367c90-e5a9-4354-86c3-8eee010b7374] Pending
helpers_test.go:344: "sp-pod" [28367c90-e5a9-4354-86c3-8eee010b7374] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [28367c90-e5a9-4354-86c3-8eee010b7374] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.015152174s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-397013 exec sp-pod -- ls /tmp/mount
2023/11/27 11:05:12 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.37s)
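The persistence check above reduces to: bind a PVC, write a marker file through one pod, delete and recreate the pod, and verify the marker survived on the volume. A sketch assuming the testdata manifests from the log are available locally (the readiness polling between steps is elided):

package main

import (
	"log"
	"os/exec"
)

// kubectl runs a kubectl subcommand against the current context, aborting on error.
func kubectl(args ...string) {
	if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// If the claim is truly persistent, the marker is still there.
	kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount/foo")
}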

TestFunctional/parallel/SSHCmd (0.46s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)

TestFunctional/parallel/CpCmd (1.01s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh -n functional-397013 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 cp functional-397013:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3456294228/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh -n functional-397013 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.01s)
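Both `cp` directions exercised above (host into the VM, then back out) can be reproduced as below; the paths and profile come from the log, while the helper itself is hypothetical:

package main

import (
	"log"
	"os/exec"
)

// cp invokes `minikube cp`; a "<profile>:" prefix marks a path inside the VM.
func cp(src, dst string) {
	out, err := exec.Command("minikube", "-p", "functional-397013",
		"cp", src, dst).CombinedOutput()
	if err != nil {
		log.Fatalf("cp %s -> %s: %v\n%s", src, dst, err, out)
	}
}

func main() {
	cp("testdata/cp-test.txt", "/home/docker/cp-test.txt")               // host -> VM
	cp("functional-397013:/home/docker/cp-test.txt", "/tmp/cp-test.txt") // VM -> host
}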

TestFunctional/parallel/MySQL (39.13s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-397013 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-bn9d8" [488f3bff-defb-4ccf-8737-1dd7b40828c8] Pending
helpers_test.go:344: "mysql-859648c796-bn9d8" [488f3bff-defb-4ccf-8737-1dd7b40828c8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-bn9d8" [488f3bff-defb-4ccf-8737-1dd7b40828c8] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 33.043882976s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-397013 exec mysql-859648c796-bn9d8 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-397013 exec mysql-859648c796-bn9d8 -- mysql -ppassword -e "show databases;": exit status 1 (237.853068ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-397013 exec mysql-859648c796-bn9d8 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-397013 exec mysql-859648c796-bn9d8 -- mysql -ppassword -e "show databases;": exit status 1 (179.016694ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-397013 exec mysql-859648c796-bn9d8 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-397013 exec mysql-859648c796-bn9d8 -- mysql -ppassword -e "show databases;": exit status 1 (291.071671ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-397013 exec mysql-859648c796-bn9d8 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (39.13s)
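The ERROR 1045 and ERROR 2002 failures above are expected while mysqld initializes inside the pod; the test simply retries `show databases;` until it succeeds. A sketch of the same tolerance (pod name taken from the log; attempt count and delay are arbitrary):

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	pod := "mysql-859648c796-bn9d8" // taken from the log above
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "exec", pod, "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			log.Printf("mysql ready:\n%s", out)
			return
		}
		// Access-denied and socket errors during startup are retried, not fatal.
		log.Printf("attempt %d: %v", attempt, err)
		time.Sleep(3 * time.Second)
	}
	log.Fatal("mysql never became ready")
}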

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/129653/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh "sudo cat /etc/test/nested/copy/129653/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (1.71s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/129653.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh "sudo cat /etc/ssl/certs/129653.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/129653.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh "sudo cat /usr/share/ca-certificates/129653.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/1296532.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh "sudo cat /etc/ssl/certs/1296532.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/1296532.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh "sudo cat /usr/share/ca-certificates/1296532.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.71s)
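The `51391683.0` and `3ec20f2e.0` names checked above are OpenSSL subject-hash filenames derived from the certificates themselves. A sketch of computing the expected name and looking it up inside the VM (the local PEM path is a placeholder, not a path from this run):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// `openssl x509 -hash` prints the subject hash used for /etc/ssl/certs links.
	hash, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/path/to/129653.pem").Output() // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	name := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(hash)))
	out, err := exec.Command("minikube", "-p", "functional-397013",
		"ssh", "sudo cat "+name).Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("found %s (%d bytes)\n", name, len(out))
}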

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-397013 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
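The go-template above indexes the first node and ranges over its label map, emitting only the keys; the same template can be driven directly, as in this sketch:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Prints the label keys of the first node in the cluster.
	tmpl := `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
	out, err := exec.Command("kubectl", "get", "nodes",
		"--output=go-template", "--template="+tmpl).Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}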

TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397013 ssh "sudo systemctl is-active crio": exit status 1 (272.143019ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)
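`systemctl is-active` exits non-zero (conventionally 3) for an inactive unit, and `minikube ssh` surfaces the remote failure as its own non-zero exit, so a non-zero exit plus "inactive" on stdout is the passing outcome here. A sketch of that interpretation:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-397013",
		"ssh", "sudo systemctl is-active crio").Output()
	state := strings.TrimSpace(string(out)) // Output still captures stdout on failure
	var ee *exec.ExitError
	switch {
	case err == nil:
		log.Fatalf("crio unexpectedly active: %q", state)
	case errors.As(err, &ee):
		fmt.Printf("exit=%d state=%q (expected when the runtime is disabled)\n",
			ee.ExitCode(), state)
	default:
		log.Fatal(err) // minikube itself could not run
	}
}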

TestFunctional/parallel/License (0.32s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.97s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.97s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-397013 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-397013
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-397013
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-397013 image ls --format short --alsologtostderr:
I1127 11:04:55.712648  137276 out.go:296] Setting OutFile to fd 1 ...
I1127 11:04:55.712894  137276 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:04:55.712905  137276 out.go:309] Setting ErrFile to fd 2...
I1127 11:04:55.712909  137276 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:04:55.713108  137276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-122411/.minikube/bin
I1127 11:04:55.713662  137276 config.go:182] Loaded profile config "functional-397013": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1127 11:04:55.713753  137276 config.go:182] Loaded profile config "functional-397013": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1127 11:04:55.714118  137276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1127 11:04:55.714164  137276 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 11:04:55.728278  137276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34243
I1127 11:04:55.728773  137276 main.go:141] libmachine: () Calling .GetVersion
I1127 11:04:55.729315  137276 main.go:141] libmachine: Using API Version  1
I1127 11:04:55.729339  137276 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 11:04:55.729676  137276 main.go:141] libmachine: () Calling .GetMachineName
I1127 11:04:55.729853  137276 main.go:141] libmachine: (functional-397013) Calling .GetState
I1127 11:04:55.731476  137276 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1127 11:04:55.731523  137276 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 11:04:55.744963  137276 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33037
I1127 11:04:55.745329  137276 main.go:141] libmachine: () Calling .GetVersion
I1127 11:04:55.745753  137276 main.go:141] libmachine: Using API Version  1
I1127 11:04:55.745785  137276 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 11:04:55.746080  137276 main.go:141] libmachine: () Calling .GetMachineName
I1127 11:04:55.746246  137276 main.go:141] libmachine: (functional-397013) Calling .DriverName
I1127 11:04:55.746475  137276 ssh_runner.go:195] Run: systemctl --version
I1127 11:04:55.746502  137276 main.go:141] libmachine: (functional-397013) Calling .GetSSHHostname
I1127 11:04:55.749175  137276 main.go:141] libmachine: (functional-397013) DBG | domain functional-397013 has defined MAC address 52:54:00:15:58:b3 in network mk-functional-397013
I1127 11:04:55.749630  137276 main.go:141] libmachine: (functional-397013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:58:b3", ip: ""} in network mk-functional-397013: {Iface:virbr1 ExpiryTime:2023-11-27 12:01:47 +0000 UTC Type:0 Mac:52:54:00:15:58:b3 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:functional-397013 Clientid:01:52:54:00:15:58:b3}
I1127 11:04:55.749670  137276 main.go:141] libmachine: (functional-397013) DBG | domain functional-397013 has defined IP address 192.168.39.107 and MAC address 52:54:00:15:58:b3 in network mk-functional-397013
I1127 11:04:55.749773  137276 main.go:141] libmachine: (functional-397013) Calling .GetSSHPort
I1127 11:04:55.749955  137276 main.go:141] libmachine: (functional-397013) Calling .GetSSHKeyPath
I1127 11:04:55.750142  137276 main.go:141] libmachine: (functional-397013) Calling .GetSSHUsername
I1127 11:04:55.750298  137276 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/functional-397013/id_rsa Username:docker}
I1127 11:04:55.830386  137276 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1127 11:04:55.853764  137276 main.go:141] libmachine: Making call to close driver server
I1127 11:04:55.853782  137276 main.go:141] libmachine: (functional-397013) Calling .Close
I1127 11:04:55.854082  137276 main.go:141] libmachine: Successfully made call to close driver server
I1127 11:04:55.854142  137276 main.go:141] libmachine: Making call to close connection to plugin binary
I1127 11:04:55.854160  137276 main.go:141] libmachine: Making call to close driver server
I1127 11:04:55.854171  137276 main.go:141] libmachine: (functional-397013) Calling .Close
I1127 11:04:55.854086  137276 main.go:141] libmachine: (functional-397013) DBG | Closing plugin on server side
I1127 11:04:55.854402  137276 main.go:141] libmachine: Successfully made call to close driver server
I1127 11:04:55.854420  137276 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)
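As the stderr above shows, `image ls` on the Docker runtime ultimately runs `docker images --no-trunc --format "{{json .}}"` in the VM and reformats the result, one JSON object per line. A sketch of decoding that stream wherever the docker CLI is available (field names follow docker's json format):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "images", "--no-trunc", "--format", "{{json .}}")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		// Each line is a self-contained JSON object describing one image.
		var img struct{ Repository, Tag, ID, Size string }
		if err := json.Unmarshal(sc.Bytes(), &img); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s:%s %s %s\n", img.Repository, img.Tag, img.ID, img.Size)
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}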

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-397013 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.28.4           | d058aa5ab969c | 122MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| docker.io/library/minikube-local-cache-test | functional-397013 | 082c0dc890f0e | 30B    |
| registry.k8s.io/kube-apiserver              | v1.28.4           | 7fe0e6f37db33 | 126MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | e3db313c6dbc0 | 60.1MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/localhost/my-image                | functional-397013 | e62bd4c0222a1 | 1.24MB |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 83f6cc407eed8 | 73.2MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | a6bd71f48f683 | 187MB  |
| gcr.io/google-containers/addon-resizer      | functional-397013 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | alpine            | b135667c98980 | 47.7MB |
| docker.io/library/mysql                     | 5.7               | bdba757bc9336 | 501MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-397013 image ls --format table --alsologtostderr:
I1127 11:04:59.381771  137759 out.go:296] Setting OutFile to fd 1 ...
I1127 11:04:59.382045  137759 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:04:59.382055  137759 out.go:309] Setting ErrFile to fd 2...
I1127 11:04:59.382060  137759 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:04:59.382239  137759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-122411/.minikube/bin
I1127 11:04:59.382843  137759 config.go:182] Loaded profile config "functional-397013": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1127 11:04:59.382962  137759 config.go:182] Loaded profile config "functional-397013": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1127 11:04:59.383487  137759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1127 11:04:59.383544  137759 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 11:04:59.397934  137759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
I1127 11:04:59.398376  137759 main.go:141] libmachine: () Calling .GetVersion
I1127 11:04:59.398988  137759 main.go:141] libmachine: Using API Version  1
I1127 11:04:59.399016  137759 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 11:04:59.399365  137759 main.go:141] libmachine: () Calling .GetMachineName
I1127 11:04:59.399557  137759 main.go:141] libmachine: (functional-397013) Calling .GetState
I1127 11:04:59.401319  137759 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1127 11:04:59.401355  137759 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 11:04:59.415486  137759 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46513
I1127 11:04:59.415918  137759 main.go:141] libmachine: () Calling .GetVersion
I1127 11:04:59.416383  137759 main.go:141] libmachine: Using API Version  1
I1127 11:04:59.416417  137759 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 11:04:59.416707  137759 main.go:141] libmachine: () Calling .GetMachineName
I1127 11:04:59.416921  137759 main.go:141] libmachine: (functional-397013) Calling .DriverName
I1127 11:04:59.417125  137759 ssh_runner.go:195] Run: systemctl --version
I1127 11:04:59.417148  137759 main.go:141] libmachine: (functional-397013) Calling .GetSSHHostname
I1127 11:04:59.419913  137759 main.go:141] libmachine: (functional-397013) DBG | domain functional-397013 has defined MAC address 52:54:00:15:58:b3 in network mk-functional-397013
I1127 11:04:59.420293  137759 main.go:141] libmachine: (functional-397013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:58:b3", ip: ""} in network mk-functional-397013: {Iface:virbr1 ExpiryTime:2023-11-27 12:01:47 +0000 UTC Type:0 Mac:52:54:00:15:58:b3 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:functional-397013 Clientid:01:52:54:00:15:58:b3}
I1127 11:04:59.420322  137759 main.go:141] libmachine: (functional-397013) DBG | domain functional-397013 has defined IP address 192.168.39.107 and MAC address 52:54:00:15:58:b3 in network mk-functional-397013
I1127 11:04:59.420498  137759 main.go:141] libmachine: (functional-397013) Calling .GetSSHPort
I1127 11:04:59.420698  137759 main.go:141] libmachine: (functional-397013) Calling .GetSSHKeyPath
I1127 11:04:59.420886  137759 main.go:141] libmachine: (functional-397013) Calling .GetSSHUsername
I1127 11:04:59.421043  137759 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/functional-397013/id_rsa Username:docker}
I1127 11:04:59.537666  137759 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1127 11:04:59.577273  137759 main.go:141] libmachine: Making call to close driver server
I1127 11:04:59.577296  137759 main.go:141] libmachine: (functional-397013) Calling .Close
I1127 11:04:59.577908  137759 main.go:141] libmachine: Successfully made call to close driver server
I1127 11:04:59.577926  137759 main.go:141] libmachine: Making call to close connection to plugin binary
I1127 11:04:59.577936  137759 main.go:141] libmachine: Making call to close driver server
I1127 11:04:59.577946  137759 main.go:141] libmachine: (functional-397013) Calling .Close
I1127 11:04:59.578213  137759 main.go:141] libmachine: Successfully made call to close driver server
I1127 11:04:59.578243  137759 main.go:141] libmachine: Making call to close connection to plugin binary
I1127 11:04:59.578249  137759 main.go:141] libmachine: (functional-397013) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-397013 image ls --format json --alsologtostderr:
[{"id":"082c0dc890f0ea2f75129425cba39751c1e55588dd48674d8bfafc9b26dc39ed","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-397013"],"size":"30"},{"id":"a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"b135667c98980d3ca424a228cc4d2afdb287dc4e1a6a813a34b2e1705517488e","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47700000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"122000000"},{"id":"83f6cc407
eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"73200000"},{"id":"e62bd4c0222a1428260bf20ec363c7b904e97b6c3a6fb99573386e1005d60503","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-397013"],"size":"1240000"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"126000000"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"60100000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6
bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-397013"],"si
ze":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-397013 image ls --format json --alsologtostderr:
I1127 11:04:59.096067  137642 out.go:296] Setting OutFile to fd 1 ...
I1127 11:04:59.096198  137642 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:04:59.096207  137642 out.go:309] Setting ErrFile to fd 2...
I1127 11:04:59.096212  137642 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:04:59.096377  137642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-122411/.minikube/bin
I1127 11:04:59.096956  137642 config.go:182] Loaded profile config "functional-397013": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1127 11:04:59.097052  137642 config.go:182] Loaded profile config "functional-397013": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1127 11:04:59.097384  137642 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1127 11:04:59.097442  137642 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 11:04:59.112133  137642 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44817
I1127 11:04:59.112730  137642 main.go:141] libmachine: () Calling .GetVersion
I1127 11:04:59.113419  137642 main.go:141] libmachine: Using API Version  1
I1127 11:04:59.113471  137642 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 11:04:59.113907  137642 main.go:141] libmachine: () Calling .GetMachineName
I1127 11:04:59.114116  137642 main.go:141] libmachine: (functional-397013) Calling .GetState
I1127 11:04:59.116409  137642 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1127 11:04:59.116462  137642 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 11:04:59.138939  137642 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35727
I1127 11:04:59.139464  137642 main.go:141] libmachine: () Calling .GetVersion
I1127 11:04:59.140062  137642 main.go:141] libmachine: Using API Version  1
I1127 11:04:59.140082  137642 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 11:04:59.140515  137642 main.go:141] libmachine: () Calling .GetMachineName
I1127 11:04:59.140711  137642 main.go:141] libmachine: (functional-397013) Calling .DriverName
I1127 11:04:59.140887  137642 ssh_runner.go:195] Run: systemctl --version
I1127 11:04:59.140910  137642 main.go:141] libmachine: (functional-397013) Calling .GetSSHHostname
I1127 11:04:59.143847  137642 main.go:141] libmachine: (functional-397013) DBG | domain functional-397013 has defined MAC address 52:54:00:15:58:b3 in network mk-functional-397013
I1127 11:04:59.144223  137642 main.go:141] libmachine: (functional-397013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:58:b3", ip: ""} in network mk-functional-397013: {Iface:virbr1 ExpiryTime:2023-11-27 12:01:47 +0000 UTC Type:0 Mac:52:54:00:15:58:b3 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:functional-397013 Clientid:01:52:54:00:15:58:b3}
I1127 11:04:59.144254  137642 main.go:141] libmachine: (functional-397013) DBG | domain functional-397013 has defined IP address 192.168.39.107 and MAC address 52:54:00:15:58:b3 in network mk-functional-397013
I1127 11:04:59.144451  137642 main.go:141] libmachine: (functional-397013) Calling .GetSSHPort
I1127 11:04:59.144616  137642 main.go:141] libmachine: (functional-397013) Calling .GetSSHKeyPath
I1127 11:04:59.144729  137642 main.go:141] libmachine: (functional-397013) Calling .GetSSHUsername
I1127 11:04:59.144846  137642 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/functional-397013/id_rsa Username:docker}
I1127 11:04:59.248509  137642 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1127 11:04:59.312605  137642 main.go:141] libmachine: Making call to close driver server
I1127 11:04:59.312620  137642 main.go:141] libmachine: (functional-397013) Calling .Close
I1127 11:04:59.312845  137642 main.go:141] libmachine: Successfully made call to close driver server
I1127 11:04:59.312858  137642 main.go:141] libmachine: (functional-397013) DBG | Closing plugin on server side
I1127 11:04:59.312871  137642 main.go:141] libmachine: Making call to close connection to plugin binary
I1127 11:04:59.312887  137642 main.go:141] libmachine: Making call to close driver server
I1127 11:04:59.312900  137642 main.go:141] libmachine: (functional-397013) Calling .Close
I1127 11:04:59.313367  137642 main.go:141] libmachine: (functional-397013) DBG | Closing plugin on server side
I1127 11:04:59.313404  137642 main.go:141] libmachine: Successfully made call to close driver server
I1127 11:04:59.313454  137642 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-397013 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "73200000"
- id: bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 082c0dc890f0ea2f75129425cba39751c1e55588dd48674d8bfafc9b26dc39ed
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-397013
size: "30"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "126000000"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "60100000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-397013
size: "32900000"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "122000000"
- id: b135667c98980d3ca424a228cc4d2afdb287dc4e1a6a813a34b2e1705517488e
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47700000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-397013 image ls --format yaml --alsologtostderr:
I1127 11:04:55.914256  137300 out.go:296] Setting OutFile to fd 1 ...
I1127 11:04:55.914510  137300 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:04:55.914520  137300 out.go:309] Setting ErrFile to fd 2...
I1127 11:04:55.914525  137300 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:04:55.914771  137300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-122411/.minikube/bin
I1127 11:04:55.915418  137300 config.go:182] Loaded profile config "functional-397013": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1127 11:04:55.915534  137300 config.go:182] Loaded profile config "functional-397013": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1127 11:04:55.915971  137300 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1127 11:04:55.916020  137300 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 11:04:55.931338  137300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39531
I1127 11:04:55.931801  137300 main.go:141] libmachine: () Calling .GetVersion
I1127 11:04:55.932358  137300 main.go:141] libmachine: Using API Version  1
I1127 11:04:55.932459  137300 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 11:04:55.932806  137300 main.go:141] libmachine: () Calling .GetMachineName
I1127 11:04:55.933007  137300 main.go:141] libmachine: (functional-397013) Calling .GetState
I1127 11:04:55.934768  137300 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1127 11:04:55.934818  137300 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 11:04:55.949520  137300 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32879
I1127 11:04:55.949883  137300 main.go:141] libmachine: () Calling .GetVersion
I1127 11:04:55.950343  137300 main.go:141] libmachine: Using API Version  1
I1127 11:04:55.950370  137300 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 11:04:55.950683  137300 main.go:141] libmachine: () Calling .GetMachineName
I1127 11:04:55.951006  137300 main.go:141] libmachine: (functional-397013) Calling .DriverName
I1127 11:04:55.951210  137300 ssh_runner.go:195] Run: systemctl --version
I1127 11:04:55.951236  137300 main.go:141] libmachine: (functional-397013) Calling .GetSSHHostname
I1127 11:04:55.953360  137300 main.go:141] libmachine: (functional-397013) DBG | domain functional-397013 has defined MAC address 52:54:00:15:58:b3 in network mk-functional-397013
I1127 11:04:55.953672  137300 main.go:141] libmachine: (functional-397013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:58:b3", ip: ""} in network mk-functional-397013: {Iface:virbr1 ExpiryTime:2023-11-27 12:01:47 +0000 UTC Type:0 Mac:52:54:00:15:58:b3 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:functional-397013 Clientid:01:52:54:00:15:58:b3}
I1127 11:04:55.953706  137300 main.go:141] libmachine: (functional-397013) DBG | domain functional-397013 has defined IP address 192.168.39.107 and MAC address 52:54:00:15:58:b3 in network mk-functional-397013
I1127 11:04:55.953841  137300 main.go:141] libmachine: (functional-397013) Calling .GetSSHPort
I1127 11:04:55.953983  137300 main.go:141] libmachine: (functional-397013) Calling .GetSSHKeyPath
I1127 11:04:55.954090  137300 main.go:141] libmachine: (functional-397013) Calling .GetSSHUsername
I1127 11:04:55.954183  137300 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/functional-397013/id_rsa Username:docker}
I1127 11:04:56.037018  137300 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1127 11:04:56.059558  137300 main.go:141] libmachine: Making call to close driver server
I1127 11:04:56.059583  137300 main.go:141] libmachine: (functional-397013) Calling .Close
I1127 11:04:56.059862  137300 main.go:141] libmachine: Successfully made call to close driver server
I1127 11:04:56.059885  137300 main.go:141] libmachine: Making call to close connection to plugin binary
I1127 11:04:56.059888  137300 main.go:141] libmachine: (functional-397013) DBG | Closing plugin on server side
I1127 11:04:56.059895  137300 main.go:141] libmachine: Making call to close driver server
I1127 11:04:56.059903  137300 main.go:141] libmachine: (functional-397013) Calling .Close
I1127 11:04:56.060109  137300 main.go:141] libmachine: Successfully made call to close driver server
I1127 11:04:56.060122  137300 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397013 ssh pgrep buildkitd: exit status 1 (195.571445ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 image build -t localhost/my-image:functional-397013 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-397013 image build -t localhost/my-image:functional-397013 testdata/build --alsologtostderr: (2.519356247s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-397013 image build -t localhost/my-image:functional-397013 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 92967f83114b
Removing intermediate container 92967f83114b
---> a5e847e9b6bd
Step 3/3 : ADD content.txt /
---> e62bd4c0222a
Successfully built e62bd4c0222a
Successfully tagged localhost/my-image:functional-397013
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-397013 image build -t localhost/my-image:functional-397013 testdata/build --alsologtostderr:
I1127 11:04:56.316164  137353 out.go:296] Setting OutFile to fd 1 ...
I1127 11:04:56.316320  137353 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:04:56.316330  137353 out.go:309] Setting ErrFile to fd 2...
I1127 11:04:56.316334  137353 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1127 11:04:56.316512  137353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-122411/.minikube/bin
I1127 11:04:56.317119  137353 config.go:182] Loaded profile config "functional-397013": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1127 11:04:56.317606  137353 config.go:182] Loaded profile config "functional-397013": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1127 11:04:56.318122  137353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1127 11:04:56.318169  137353 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 11:04:56.332629  137353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36377
I1127 11:04:56.333079  137353 main.go:141] libmachine: () Calling .GetVersion
I1127 11:04:56.333630  137353 main.go:141] libmachine: Using API Version  1
I1127 11:04:56.333654  137353 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 11:04:56.333978  137353 main.go:141] libmachine: () Calling .GetMachineName
I1127 11:04:56.334180  137353 main.go:141] libmachine: (functional-397013) Calling .GetState
I1127 11:04:56.336040  137353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1127 11:04:56.336092  137353 main.go:141] libmachine: Launching plugin server for driver kvm2
I1127 11:04:56.350411  137353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42279
I1127 11:04:56.350844  137353 main.go:141] libmachine: () Calling .GetVersion
I1127 11:04:56.351380  137353 main.go:141] libmachine: Using API Version  1
I1127 11:04:56.351413  137353 main.go:141] libmachine: () Calling .SetConfigRaw
I1127 11:04:56.351730  137353 main.go:141] libmachine: () Calling .GetMachineName
I1127 11:04:56.351939  137353 main.go:141] libmachine: (functional-397013) Calling .DriverName
I1127 11:04:56.352182  137353 ssh_runner.go:195] Run: systemctl --version
I1127 11:04:56.352217  137353 main.go:141] libmachine: (functional-397013) Calling .GetSSHHostname
I1127 11:04:56.354990  137353 main.go:141] libmachine: (functional-397013) DBG | domain functional-397013 has defined MAC address 52:54:00:15:58:b3 in network mk-functional-397013
I1127 11:04:56.355385  137353 main.go:141] libmachine: (functional-397013) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:58:b3", ip: ""} in network mk-functional-397013: {Iface:virbr1 ExpiryTime:2023-11-27 12:01:47 +0000 UTC Type:0 Mac:52:54:00:15:58:b3 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:functional-397013 Clientid:01:52:54:00:15:58:b3}
I1127 11:04:56.355414  137353 main.go:141] libmachine: (functional-397013) DBG | domain functional-397013 has defined IP address 192.168.39.107 and MAC address 52:54:00:15:58:b3 in network mk-functional-397013
I1127 11:04:56.355560  137353 main.go:141] libmachine: (functional-397013) Calling .GetSSHPort
I1127 11:04:56.355729  137353 main.go:141] libmachine: (functional-397013) Calling .GetSSHKeyPath
I1127 11:04:56.355897  137353 main.go:141] libmachine: (functional-397013) Calling .GetSSHUsername
I1127 11:04:56.356047  137353 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/functional-397013/id_rsa Username:docker}
I1127 11:04:56.459119  137353 build_images.go:151] Building image from path: /tmp/build.2338458805.tar
I1127 11:04:56.459205  137353 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1127 11:04:56.469763  137353 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2338458805.tar
I1127 11:04:56.474405  137353 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2338458805.tar: stat -c "%s %y" /var/lib/minikube/build/build.2338458805.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2338458805.tar': No such file or directory
I1127 11:04:56.474439  137353 ssh_runner.go:362] scp /tmp/build.2338458805.tar --> /var/lib/minikube/build/build.2338458805.tar (3072 bytes)
I1127 11:04:56.505106  137353 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2338458805
I1127 11:04:56.521032  137353 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2338458805 -xf /var/lib/minikube/build/build.2338458805.tar
I1127 11:04:56.544151  137353 docker.go:346] Building image: /var/lib/minikube/build/build.2338458805
I1127 11:04:56.544215  137353 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-397013 /var/lib/minikube/build/build.2338458805
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I1127 11:04:58.753954  137353 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-397013 /var/lib/minikube/build/build.2338458805: (2.209712379s)
I1127 11:04:58.754019  137353 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2338458805
I1127 11:04:58.765360  137353 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2338458805.tar
I1127 11:04:58.775646  137353 build_images.go:207] Built localhost/my-image:functional-397013 from /tmp/build.2338458805.tar
I1127 11:04:58.775676  137353 build_images.go:123] succeeded building to: functional-397013
I1127 11:04:58.775680  137353 build_images.go:124] failed building to: 
I1127 11:04:58.775705  137353 main.go:141] libmachine: Making call to close driver server
I1127 11:04:58.775716  137353 main.go:141] libmachine: (functional-397013) Calling .Close
I1127 11:04:58.776000  137353 main.go:141] libmachine: Successfully made call to close driver server
I1127 11:04:58.776019  137353 main.go:141] libmachine: Making call to close connection to plugin binary
I1127 11:04:58.776027  137353 main.go:141] libmachine: Making call to close driver server
I1127 11:04:58.776046  137353 main.go:141] libmachine: (functional-397013) Calling .Close
I1127 11:04:58.776269  137353 main.go:141] libmachine: Successfully made call to close driver server
I1127 11:04:58.776280  137353 main.go:141] libmachine: Making call to close connection to plugin binary
I1127 11:04:58.776351  137353 main.go:141] libmachine: (functional-397013) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.97s)
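
The stderr log above shows how `minikube image build` works under the hood: the local build context is packed into a tar (/tmp/build.2338458805.tar), copied over SSH into the VM under /var/lib/minikube/build/, extracted, and built with the VM's own `docker build`. The three logged steps also imply that testdata/build contains a Dockerfile along the lines of FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt / (a reconstruction from the step output, not the verbatim file). As a rough sketch of the first stage only, here is minimal Go that packs a context directory into a tar; it assumes nothing about minikube's internal build_images.go helpers:

package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// tarContext writes every regular file under dir into a tar at dest,
// mirroring the "Building image from path: /tmp/build.*.tar" step above.
func tarContext(dir, dest string) error {
	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()
	tw := tar.NewWriter(out)
	defer tw.Close()

	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		rel, err := filepath.Rel(dir, path)
		if err != nil {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name = rel // store paths relative to the context root
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(tw, f)
		return err
	})
}

func main() {
	if err := tarContext("testdata/build", "/tmp/build.example.tar"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}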
TestFunctional/parallel/ImageCommands/Setup (1.38s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.353589506s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-397013
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.38s)

TestFunctional/parallel/DockerEnv/bash (0.98s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-397013 docker-env) && out/minikube-linux-amd64 status -p functional-397013"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-397013 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.98s)
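
The DockerEnv test works because `minikube docker-env` prints shell exports (DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH, MINIKUBE_ACTIVE_DOCKERD) that point the host's docker client at the daemon inside the VM, so the `docker images` after the eval lists the VM's images. Below is a hedged Go equivalent of that eval-then-run pattern; the host value reuses the VM IP logged earlier (192.168.39.107), but the port and cert path are conventional defaults, not values captured from this run:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "images")
	// Stand-in for `eval $(minikube -p functional-397013 docker-env)`.
	cmd.Env = append(os.Environ(),
		"DOCKER_HOST=tcp://192.168.39.107:2376", // VM IP from the log; 2376 is docker's usual TLS port
		"DOCKER_TLS_VERIFY=1",
		"DOCKER_CERT_PATH=/home/jenkins/.minikube/certs", // illustrative path, not from this run
		"MINIKUBE_ACTIVE_DOCKERD=functional-397013",
	)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		os.Exit(1)
	}
}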
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-397013 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-397013 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-nk94s" [97cf7af1-6c0f-4bdb-bcba-73e5d0d35bab] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-nk94s" [97cf7af1-6c0f-4bdb-bcba-73e5d0d35bab] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.029867748s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.30s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 image load --daemon gcr.io/google-containers/addon-resizer:functional-397013 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-397013 image load --daemon gcr.io/google-containers/addon-resizer:functional-397013 --alsologtostderr: (4.248846469s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.46s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-397013 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-397013 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-397013 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 135740: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-397013 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-397013 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (35.34s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-397013 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5e099eac-f828-4829-8961-6002674bfa7e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [5e099eac-f828-4829-8961-6002674bfa7e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 35.026503879s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (35.34s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 image load --daemon gcr.io/google-containers/addon-resizer:functional-397013 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-397013 image load --daemon gcr.io/google-containers/addon-resizer:functional-397013 --alsologtostderr: (2.245240823s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.47s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.253857161s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-397013
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 image load --daemon gcr.io/google-containers/addon-resizer:functional-397013 --alsologtostderr
E1127 11:04:25.967907  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-397013 image load --daemon gcr.io/google-containers/addon-resizer:functional-397013 --alsologtostderr: (4.453598157s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.95s)

TestFunctional/parallel/ServiceCmd/List (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.45s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 service list -o json
functional_test.go:1493: Took "474.732093ms" to run "out/minikube-linux-amd64 -p functional-397013 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.47s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 image save gcr.io/google-containers/addon-resizer:functional-397013 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-397013 image save gcr.io/google-containers/addon-resizer:functional-397013 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.864833878s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.86s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.107:32471
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

TestFunctional/parallel/ServiceCmd/Format (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

TestFunctional/parallel/ServiceCmd/URL (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.107:32471
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)
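
The three service lookups above resolve the same NodePort endpoint in different forms: an HTTPS URL, a bare IP via --format={{.IP}}, and a plain HTTP URL. Once an endpoint such as http://192.168.39.107:32471 is known, consuming it needs nothing minikube-specific; a minimal sketch, assuming only that the service answers HTTP:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// NodePort URL as reported by `minikube service hello-node --url` above.
	resp, err := http.Get("http://192.168.39.107:32471")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Print(string(body))
}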
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 image rm gcr.io/google-containers/addon-resizer:functional-397013 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-397013 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.491335035s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.71s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-397013
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 image save --daemon gcr.io/google-containers/addon-resizer:functional-397013 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-397013 image save --daemon gcr.io/google-containers/addon-resizer:functional-397013 --alsologtostderr: (1.870986332s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-397013
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.90s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "265.625667ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "58.183627ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "287.552047ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "63.407181ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)
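
The timings above are worth noting: `profile list -o json --light` returns in about 63ms versus about 288ms for the full listing, since the light variant skips validating each cluster's live status. A sketch of consuming the JSON output without assuming its exact schema (the concrete field names are not shown in this log, so it is decoded as an opaque object):

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var profiles map[string]any // schema left opaque on purpose
	if err := json.Unmarshal(out, &profiles); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for key := range profiles {
		fmt.Println(key) // top-level groups of the profile listing
	}
}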
TestFunctional/parallel/MountCmd/any-port (13.73s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-397013 /tmp/TestFunctionalparallelMountCmdany-port2816373578/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1701083083769211725" to /tmp/TestFunctionalparallelMountCmdany-port2816373578/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1701083083769211725" to /tmp/TestFunctionalparallelMountCmdany-port2816373578/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1701083083769211725" to /tmp/TestFunctionalparallelMountCmdany-port2816373578/001/test-1701083083769211725
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397013 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (218.702784ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 27 11:04 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 27 11:04 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 27 11:04 test-1701083083769211725
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh cat /mount-9p/test-1701083083769211725
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-397013 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [23469f54-8f5b-4c69-9680-28125c523d1b] Pending
helpers_test.go:344: "busybox-mount" [23469f54-8f5b-4c69-9680-28125c523d1b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [23469f54-8f5b-4c69-9680-28125c523d1b] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [23469f54-8f5b-4c69-9680-28125c523d1b] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 11.029296636s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-397013 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-397013 /tmp/TestFunctionalparallelMountCmdany-port2816373578/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (13.73s)
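
Note the pattern at functional_test_mount_test.go:115 above: the first findmnt probe exits 1 because the 9p mount is still being established, and the test simply re-runs the probe until it succeeds. A sketch of that retry-until-mounted loop, reusing the binary path and profile name from this run:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(30 * time.Second)
	for {
		// Same probe the test issues through `minikube ssh`.
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-397013",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("9p mount is up")
			return
		}
		if time.Now().After(deadline) {
			fmt.Fprintln(os.Stderr, "mount never appeared:", err)
			os.Exit(1)
		}
		time.Sleep(time.Second)
	}
}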
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-397013 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.18.22 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-397013 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/MountCmd/specific-port (1.62s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-397013 /tmp/TestFunctionalparallelMountCmdspecific-port3178562032/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397013 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (223.186499ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-397013 /tmp/TestFunctionalparallelMountCmdspecific-port3178562032/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397013 ssh "sudo umount -f /mount-9p": exit status 1 (222.694564ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-397013 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-397013 /tmp/TestFunctionalparallelMountCmdspecific-port3178562032/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.62s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-397013 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3609705221/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-397013 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3609705221/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-397013 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3609705221/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-397013 ssh "findmnt -T" /mount1: exit status 1 (306.176895ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-397013 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-397013 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-397013 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3609705221/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-397013 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3609705221/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-397013 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3609705221/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-397013
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-397013
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-397013
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestGvisorAddon (328.46s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-691176 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-691176 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (2m4.716855759s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-691176 cache add gcr.io/k8s-minikube/gvisor-addon:2
E1127 11:32:51.957060  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/ingress-addon-legacy-968829/client.crt: no such file or directory
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-691176 cache add gcr.io/k8s-minikube/gvisor-addon:2: (24.025900608s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-691176 addons enable gvisor
E1127 11:33:04.045261  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-691176 addons enable gvisor: (4.447406469s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [a729f416-5dae-4f8b-bba1-fd28fec3c906] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.023007965s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-691176 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [8deb1884-a583-4706-b497-263158be5d6d] Pending
helpers_test.go:344: "nginx-gvisor" [8deb1884-a583-4706-b497-263158be5d6d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [8deb1884-a583-4706-b497-263158be5d6d] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 14.051343202s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-691176
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-691176: (1m34.751882885s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-691176 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E1127 11:35:22.534856  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/skaffold-861907/client.crt: no such file or directory
E1127 11:35:22.540168  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/skaffold-861907/client.crt: no such file or directory
E1127 11:35:22.550435  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/skaffold-861907/client.crt: no such file or directory
E1127 11:35:22.570698  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/skaffold-861907/client.crt: no such file or directory
E1127 11:35:22.611007  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/skaffold-861907/client.crt: no such file or directory
E1127 11:35:22.691371  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/skaffold-861907/client.crt: no such file or directory
E1127 11:35:22.852113  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/skaffold-861907/client.crt: no such file or directory
E1127 11:35:23.172723  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/skaffold-861907/client.crt: no such file or directory
E1127 11:35:23.813486  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/skaffold-861907/client.crt: no such file or directory
E1127 11:35:25.093785  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/skaffold-861907/client.crt: no such file or directory
E1127 11:35:27.654713  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/skaffold-861907/client.crt: no such file or directory
E1127 11:35:32.775238  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/skaffold-861907/client.crt: no such file or directory
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-691176 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (49.681717266s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [a729f416-5dae-4f8b-bba1-fd28fec3c906] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.032547758s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [8deb1884-a583-4706-b497-263158be5d6d] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.01231705s
helpers_test.go:175: Cleaning up "gvisor-691176" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-691176
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-691176: (1.435695431s)
--- PASS: TestGvisorAddon (328.46s)

TestImageBuild/serial/Setup (51.42s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-084623 --driver=kvm2 
E1127 11:05:47.891326  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-084623 --driver=kvm2 : (51.417011067s)
--- PASS: TestImageBuild/serial/Setup (51.42s)

TestImageBuild/serial/NormalBuild (1.62s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-084623
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-084623: (1.621436057s)
--- PASS: TestImageBuild/serial/NormalBuild (1.62s)

TestImageBuild/serial/BuildWithBuildArg (1.36s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-084623
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-084623: (1.364594253s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.36s)

TestImageBuild/serial/BuildWithDockerIgnore (0.4s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-084623
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.40s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.3s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-084623
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.30s)

TestIngressAddonLegacy/StartLegacyK8sCluster (83.21s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-968829 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-968829 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m23.212435068s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (83.21s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.42s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-968829 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-968829 addons enable ingress --alsologtostderr -v=5: (17.41716396s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.42s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.52s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-968829 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.52s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (36.05s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-968829 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-968829 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.221908963s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-968829 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-968829 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [98214f95-aff2-49a2-a596-46706157a513] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1127 11:08:04.045545  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
helpers_test.go:344: "nginx" [98214f95-aff2-49a2-a596-46706157a513] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.019673172s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-968829 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-968829 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-968829 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.50.7
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-968829 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-968829 addons disable ingress-dns --alsologtostderr -v=1: (6.237754412s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-968829 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-968829 addons disable ingress --alsologtostderr -v=1: (7.46390306s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (36.05s)

TestJSONOutput/start/Command (68.15s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-625743 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E1127 11:08:31.733411  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
E1127 11:09:14.479488  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
E1127 11:09:14.484860  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
E1127 11:09:14.495182  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
E1127 11:09:14.515501  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
E1127 11:09:14.555821  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
E1127 11:09:14.636225  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
E1127 11:09:14.796673  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
E1127 11:09:15.117306  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
E1127 11:09:15.757699  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
E1127 11:09:17.037956  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
E1127 11:09:19.598163  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
E1127 11:09:24.719286  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
E1127 11:09:34.960299  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-625743 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m8.153458939s)
--- PASS: TestJSONOutput/start/Command (68.15s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.61s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-625743 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.61s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.55s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-625743 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.55s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.11s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-625743 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-625743 --output=json --user=testUser: (8.109860081s)
--- PASS: TestJSONOutput/stop/Command (8.11s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-446316 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-446316 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.877622ms)
-- stdout --
	{"specversion":"1.0","id":"d39de8df-d11e-4908-91b1-5ee1bf1c8a56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-446316] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"970ffc87-0538-4c5d-a6ea-6daeba90d949","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17644"}}
	{"specversion":"1.0","id":"6c5ba611-ccf6-464d-be9e-f41f9fab078c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1c6f3c39-159c-47e9-b0ec-4eaec54ae609","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17644-122411/kubeconfig"}}
	{"specversion":"1.0","id":"eab68f5c-175b-484d-9332-e1f84169676b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-122411/.minikube"}}
	{"specversion":"1.0","id":"3081d799-c5ed-45ec-bb7e-e96b10c9a61e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8a5c39e1-248e-411d-ab88-0e8398ec4f6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"63b36062-8ff1-49db-9b3c-43ea83151b29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-446316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-446316
--- PASS: TestErrorJSONOutput (0.22s)
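Note: every line minikube prints under --output=json is a CloudEvents-style envelope, as the captured stdout above shows. A minimal Go sketch for decoding one such line follows; the struct covers only the fields visible in this report, and the event literal is abridged from the log, so treat it as illustrative rather than minikube's authoritative schema.

package main

import (
	"encoding/json"
	"fmt"
)

// minikubeEvent mirrors the envelope fields seen in the captured output:
// specversion, id, source, type, datacontenttype, and a string-valued data
// map (currentstep, message, name, totalsteps, exitcode, ...).
type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// Abridged from the TestErrorJSONOutput stdout above.
	line := `{"specversion":"1.0","id":"63b36062-8ff1-49db-9b3c-43ea83151b29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (%s)\n", ev.Type, ev.Data["message"], ev.Data["name"])
}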

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (102.58s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-887507 --driver=kvm2 
E1127 11:09:55.440831  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
E1127 11:10:36.401619  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-887507 --driver=kvm2 : (50.999498041s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-889776 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-889776 --driver=kvm2 : (48.680325366s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-887507
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-889776
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-889776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-889776
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-889776: (1.000252367s)
helpers_test.go:175: Cleaning up "first-887507" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-887507
--- PASS: TestMinikubeProfile (102.58s)
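Note: the profile test above switches the active profile and then inspects `out/minikube-linux-amd64 profile list -ojson`. A hedged sketch of consuming that output follows; the top-level "valid"/"invalid" arrays and the "Name" field are an assumption about minikube's JSON layout (they are not shown in this report), so decode defensively and only what you need.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList decodes only the fields this sketch relies on; everything else
// in the JSON is ignored. The "valid"/"invalid" keys are assumed, not taken
// from this report.
type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
	Invalid []struct {
		Name string `json:"Name"`
	} `json:"invalid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Println("valid profile:", p.Name)
	}
}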

TestMountStart/serial/StartWithMountFirst (28.82s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-431981 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-431981 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (27.82332335s)
E1127 11:11:58.322181  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
--- PASS: TestMountStart/serial/StartWithMountFirst (28.82s)

TestMountStart/serial/VerifyMountFirst (0.43s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-431981 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-431981 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.43s)
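Note: the mount verification above is two SSH probes: list the mounted host directory, then look for a 9p entry in the guest's mount table. A compact Go sketch of the same check (profile name taken from this run; unlike the test, the grep is done client-side here rather than in the guest):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// verify9pMount reruns the two probes from the test: `ls /minikube-host`
// must succeed, and the guest's mount table must contain a 9p filesystem.
func verify9pMount(profile string) error {
	if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "--", "ls", "/minikube-host").CombinedOutput(); err != nil {
		return fmt.Errorf("ls /minikube-host: %v\n%s", err, out)
	}
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "--", "mount").Output()
	if err != nil {
		return err
	}
	if !strings.Contains(string(out), "9p") {
		return fmt.Errorf("no 9p mount found in guest")
	}
	return nil
}

func main() {
	fmt.Println(verify9pMount("mount-start-1-431981"))
}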

TestMountStart/serial/StartWithMountSecond (28.06s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-448744 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-448744 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (27.05965079s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.06s)

TestMountStart/serial/VerifyMountSecond (0.41s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-448744 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-448744 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

TestMountStart/serial/DeleteFirst (0.67s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-431981 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

TestMountStart/serial/VerifyMountPostDelete (0.42s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-448744 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-448744 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.42s)

TestMountStart/serial/Stop (2.1s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-448744
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-448744: (2.095592884s)
--- PASS: TestMountStart/serial/Stop (2.10s)

TestMountStart/serial/RestartStopped (23.54s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-448744
E1127 11:12:51.959047  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/ingress-addon-legacy-968829/client.crt: no such file or directory
E1127 11:12:51.964302  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/ingress-addon-legacy-968829/client.crt: no such file or directory
E1127 11:12:51.974560  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/ingress-addon-legacy-968829/client.crt: no such file or directory
E1127 11:12:51.994821  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/ingress-addon-legacy-968829/client.crt: no such file or directory
E1127 11:12:52.035064  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/ingress-addon-legacy-968829/client.crt: no such file or directory
E1127 11:12:52.115353  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/ingress-addon-legacy-968829/client.crt: no such file or directory
E1127 11:12:52.275813  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/ingress-addon-legacy-968829/client.crt: no such file or directory
E1127 11:12:52.596449  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/ingress-addon-legacy-968829/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-448744: (22.54179291s)
E1127 11:12:53.237113  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/ingress-addon-legacy-968829/client.crt: no such file or directory
--- PASS: TestMountStart/serial/RestartStopped (23.54s)

TestMountStart/serial/VerifyMountPostStop (0.42s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-448744 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-448744 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

TestMultiNode/serial/FreshStart2Nodes (125.28s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-397554 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E1127 11:12:57.077888  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/ingress-addon-legacy-968829/client.crt: no such file or directory
E1127 11:13:02.198116  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/ingress-addon-legacy-968829/client.crt: no such file or directory
E1127 11:13:04.045050  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
E1127 11:13:12.439218  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/ingress-addon-legacy-968829/client.crt: no such file or directory
E1127 11:13:32.920242  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/ingress-addon-legacy-968829/client.crt: no such file or directory
E1127 11:14:13.881391  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/ingress-addon-legacy-968829/client.crt: no such file or directory
E1127 11:14:14.479348  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
E1127 11:14:42.162471  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-397554 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m4.833674721s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (125.28s)

TestMultiNode/serial/DeployApp2Nodes (4.94s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397554 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397554 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-397554 -- rollout status deployment/busybox: (3.03770927s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397554 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397554 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397554 -- exec busybox-5bc68d56bd-j4n94 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397554 -- exec busybox-5bc68d56bd-xd7kb -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397554 -- exec busybox-5bc68d56bd-j4n94 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397554 -- exec busybox-5bc68d56bd-xd7kb -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397554 -- exec busybox-5bc68d56bd-j4n94 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397554 -- exec busybox-5bc68d56bd-xd7kb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.94s)
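Note: the deployment check above resolves three names of increasing specificity (kubernetes.io, kubernetes.default, kubernetes.default.svc.cluster.local) from every busybox pod. A sketch of that loop, with the profile and pod names taken from this run:

package main

import (
	"fmt"
	"os/exec"
)

// nslookupFromPod mirrors the test's probe: exec nslookup inside a pod via
// minikube's bundled kubectl and surface any failure together with output.
func nslookupFromPod(profile, pod, name string) error {
	out, err := exec.Command("out/minikube-linux-amd64", "kubectl", "-p", profile,
		"--", "exec", pod, "--", "nslookup", name).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: nslookup %s: %v\n%s", pod, name, err, out)
	}
	return nil
}

func main() {
	pods := []string{"busybox-5bc68d56bd-j4n94", "busybox-5bc68d56bd-xd7kb"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			if err := nslookupFromPod("multinode-397554", pod, name); err != nil {
				fmt.Println(err)
			}
		}
	}
}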

TestMultiNode/serial/PingHostFrom2Pods (0.95s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397554 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397554 -- exec busybox-5bc68d56bd-j4n94 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397554 -- exec busybox-5bc68d56bd-j4n94 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397554 -- exec busybox-5bc68d56bd-xd7kb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-397554 -- exec busybox-5bc68d56bd-xd7kb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.95s)

TestMultiNode/serial/AddNode (47.58s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-397554 -v 3 --alsologtostderr
E1127 11:15:35.802200  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/ingress-addon-legacy-968829/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-397554 -v 3 --alsologtostderr: (46.981867766s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.58s)

TestMultiNode/serial/ProfileList (0.22s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

TestMultiNode/serial/CopyFile (7.86s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 cp testdata/cp-test.txt multinode-397554:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 ssh -n multinode-397554 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 cp multinode-397554:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3434617344/001/cp-test_multinode-397554.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 ssh -n multinode-397554 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 cp multinode-397554:/home/docker/cp-test.txt multinode-397554-m02:/home/docker/cp-test_multinode-397554_multinode-397554-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 ssh -n multinode-397554 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 ssh -n multinode-397554-m02 "sudo cat /home/docker/cp-test_multinode-397554_multinode-397554-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 cp multinode-397554:/home/docker/cp-test.txt multinode-397554-m03:/home/docker/cp-test_multinode-397554_multinode-397554-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 ssh -n multinode-397554 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 ssh -n multinode-397554-m03 "sudo cat /home/docker/cp-test_multinode-397554_multinode-397554-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 cp testdata/cp-test.txt multinode-397554-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 ssh -n multinode-397554-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 cp multinode-397554-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3434617344/001/cp-test_multinode-397554-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 ssh -n multinode-397554-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 cp multinode-397554-m02:/home/docker/cp-test.txt multinode-397554:/home/docker/cp-test_multinode-397554-m02_multinode-397554.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 ssh -n multinode-397554-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 ssh -n multinode-397554 "sudo cat /home/docker/cp-test_multinode-397554-m02_multinode-397554.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 cp multinode-397554-m02:/home/docker/cp-test.txt multinode-397554-m03:/home/docker/cp-test_multinode-397554-m02_multinode-397554-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 ssh -n multinode-397554-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 ssh -n multinode-397554-m03 "sudo cat /home/docker/cp-test_multinode-397554-m02_multinode-397554-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 cp testdata/cp-test.txt multinode-397554-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 ssh -n multinode-397554-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 cp multinode-397554-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3434617344/001/cp-test_multinode-397554-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 ssh -n multinode-397554-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 cp multinode-397554-m03:/home/docker/cp-test.txt multinode-397554:/home/docker/cp-test_multinode-397554-m03_multinode-397554.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 ssh -n multinode-397554-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 ssh -n multinode-397554 "sudo cat /home/docker/cp-test_multinode-397554-m03_multinode-397554.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 cp multinode-397554-m03:/home/docker/cp-test.txt multinode-397554-m02:/home/docker/cp-test_multinode-397554-m03_multinode-397554-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 ssh -n multinode-397554-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 ssh -n multinode-397554-m02 "sudo cat /home/docker/cp-test_multinode-397554-m03_multinode-397554-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.86s)
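Note: every copy above is verified immediately by reading the file back with `ssh ... sudo cat` on the destination node. A condensed sketch of one copy/verify round trip, with profile, node, and paths taken from this run:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// cpAndVerify copies a local file onto a node with `minikube cp`, reads it
// back over SSH, and compares the bytes, as the helpers above do repeatedly.
func cpAndVerify(profile, node, local, remote string) error {
	if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"cp", local, node+":"+remote).CombinedOutput(); err != nil {
		return fmt.Errorf("cp: %v\n%s", err, out)
	}
	got, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "-n", node, "sudo cat "+remote).Output()
	if err != nil {
		return err
	}
	want, err := os.ReadFile(local)
	if err != nil {
		return err
	}
	if !bytes.Equal(got, want) {
		return fmt.Errorf("content mismatch on %s:%s", node, remote)
	}
	return nil
}

func main() {
	fmt.Println(cpAndVerify("multinode-397554", "multinode-397554-m02",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt"))
}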

TestMultiNode/serial/StopNode (4s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-397554 node stop m03: (3.096579298s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-397554 status: exit status 7 (451.988543ms)
-- stdout --
	multinode-397554
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-397554-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-397554-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-397554 status --alsologtostderr: exit status 7 (453.244251ms)
-- stdout --
	multinode-397554
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-397554-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-397554-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1127 11:16:05.908601  144765 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:16:05.908719  144765 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:16:05.908730  144765 out.go:309] Setting ErrFile to fd 2...
	I1127 11:16:05.908737  144765 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:16:05.908944  144765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-122411/.minikube/bin
	I1127 11:16:05.909144  144765 out.go:303] Setting JSON to false
	I1127 11:16:05.909177  144765 mustload.go:65] Loading cluster: multinode-397554
	I1127 11:16:05.909276  144765 notify.go:220] Checking for updates...
	I1127 11:16:05.909667  144765 config.go:182] Loaded profile config "multinode-397554": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1127 11:16:05.909685  144765 status.go:255] checking status of multinode-397554 ...
	I1127 11:16:05.910134  144765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:16:05.910188  144765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:16:05.935635  144765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41021
	I1127 11:16:05.936079  144765 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:16:05.936690  144765 main.go:141] libmachine: Using API Version  1
	I1127 11:16:05.936716  144765 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:16:05.937097  144765 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:16:05.937298  144765 main.go:141] libmachine: (multinode-397554) Calling .GetState
	I1127 11:16:05.939199  144765 status.go:330] multinode-397554 host status = "Running" (err=<nil>)
	I1127 11:16:05.939224  144765 host.go:66] Checking if "multinode-397554" exists ...
	I1127 11:16:05.939807  144765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:16:05.939869  144765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:16:05.954374  144765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
	I1127 11:16:05.954745  144765 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:16:05.955151  144765 main.go:141] libmachine: Using API Version  1
	I1127 11:16:05.955182  144765 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:16:05.955509  144765 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:16:05.955735  144765 main.go:141] libmachine: (multinode-397554) Calling .GetIP
	I1127 11:16:05.958713  144765 main.go:141] libmachine: (multinode-397554) DBG | domain multinode-397554 has defined MAC address 52:54:00:45:e6:6c in network mk-multinode-397554
	I1127 11:16:05.959128  144765 main.go:141] libmachine: (multinode-397554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:e6:6c", ip: ""} in network mk-multinode-397554: {Iface:virbr1 ExpiryTime:2023-11-27 12:13:11 +0000 UTC Type:0 Mac:52:54:00:45:e6:6c Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-397554 Clientid:01:52:54:00:45:e6:6c}
	I1127 11:16:05.959181  144765 main.go:141] libmachine: (multinode-397554) DBG | domain multinode-397554 has defined IP address 192.168.39.180 and MAC address 52:54:00:45:e6:6c in network mk-multinode-397554
	I1127 11:16:05.959324  144765 host.go:66] Checking if "multinode-397554" exists ...
	I1127 11:16:05.959612  144765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:16:05.959659  144765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:16:05.973776  144765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39985
	I1127 11:16:05.974132  144765 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:16:05.974658  144765 main.go:141] libmachine: Using API Version  1
	I1127 11:16:05.974689  144765 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:16:05.975008  144765 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:16:05.975277  144765 main.go:141] libmachine: (multinode-397554) Calling .DriverName
	I1127 11:16:05.975461  144765 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 11:16:05.975484  144765 main.go:141] libmachine: (multinode-397554) Calling .GetSSHHostname
	I1127 11:16:05.978020  144765 main.go:141] libmachine: (multinode-397554) DBG | domain multinode-397554 has defined MAC address 52:54:00:45:e6:6c in network mk-multinode-397554
	I1127 11:16:05.978489  144765 main.go:141] libmachine: (multinode-397554) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:e6:6c", ip: ""} in network mk-multinode-397554: {Iface:virbr1 ExpiryTime:2023-11-27 12:13:11 +0000 UTC Type:0 Mac:52:54:00:45:e6:6c Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-397554 Clientid:01:52:54:00:45:e6:6c}
	I1127 11:16:05.978522  144765 main.go:141] libmachine: (multinode-397554) DBG | domain multinode-397554 has defined IP address 192.168.39.180 and MAC address 52:54:00:45:e6:6c in network mk-multinode-397554
	I1127 11:16:05.978691  144765 main.go:141] libmachine: (multinode-397554) Calling .GetSSHPort
	I1127 11:16:05.978892  144765 main.go:141] libmachine: (multinode-397554) Calling .GetSSHKeyPath
	I1127 11:16:05.979059  144765 main.go:141] libmachine: (multinode-397554) Calling .GetSSHUsername
	I1127 11:16:05.979217  144765 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/multinode-397554/id_rsa Username:docker}
	I1127 11:16:06.071010  144765 ssh_runner.go:195] Run: systemctl --version
	I1127 11:16:06.076661  144765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:16:06.090750  144765 kubeconfig.go:92] found "multinode-397554" server: "https://192.168.39.180:8443"
	I1127 11:16:06.090785  144765 api_server.go:166] Checking apiserver status ...
	I1127 11:16:06.090823  144765 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1127 11:16:06.104106  144765 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1902/cgroup
	I1127 11:16:06.113148  144765 api_server.go:182] apiserver freezer: "9:freezer:/kubepods/burstable/pod8c89dd2ca1a302312fbdb8919789a3f4/a280b272f36cde06fe5e5613b5d0af67a7115e7965a2bbc32928bcd0a0c81b97"
	I1127 11:16:06.113211  144765 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod8c89dd2ca1a302312fbdb8919789a3f4/a280b272f36cde06fe5e5613b5d0af67a7115e7965a2bbc32928bcd0a0c81b97/freezer.state
	I1127 11:16:06.123861  144765 api_server.go:204] freezer state: "THAWED"
	I1127 11:16:06.123887  144765 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8443/healthz ...
	I1127 11:16:06.128614  144765 api_server.go:279] https://192.168.39.180:8443/healthz returned 200:
	ok
	I1127 11:16:06.128635  144765 status.go:421] multinode-397554 apiserver status = Running (err=<nil>)
	I1127 11:16:06.128647  144765 status.go:257] multinode-397554 status: &{Name:multinode-397554 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1127 11:16:06.128680  144765 status.go:255] checking status of multinode-397554-m02 ...
	I1127 11:16:06.129027  144765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:16:06.129060  144765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:16:06.143615  144765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44587
	I1127 11:16:06.144070  144765 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:16:06.144480  144765 main.go:141] libmachine: Using API Version  1
	I1127 11:16:06.144504  144765 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:16:06.144818  144765 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:16:06.144997  144765 main.go:141] libmachine: (multinode-397554-m02) Calling .GetState
	I1127 11:16:06.146510  144765 status.go:330] multinode-397554-m02 host status = "Running" (err=<nil>)
	I1127 11:16:06.146532  144765 host.go:66] Checking if "multinode-397554-m02" exists ...
	I1127 11:16:06.146787  144765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:16:06.146809  144765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:16:06.160727  144765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36455
	I1127 11:16:06.161168  144765 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:16:06.161625  144765 main.go:141] libmachine: Using API Version  1
	I1127 11:16:06.161647  144765 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:16:06.162045  144765 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:16:06.162212  144765 main.go:141] libmachine: (multinode-397554-m02) Calling .GetIP
	I1127 11:16:06.165174  144765 main.go:141] libmachine: (multinode-397554-m02) DBG | domain multinode-397554-m02 has defined MAC address 52:54:00:72:c9:87 in network mk-multinode-397554
	I1127 11:16:06.165771  144765 main.go:141] libmachine: (multinode-397554-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:c9:87", ip: ""} in network mk-multinode-397554: {Iface:virbr1 ExpiryTime:2023-11-27 12:14:29 +0000 UTC Type:0 Mac:52:54:00:72:c9:87 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-397554-m02 Clientid:01:52:54:00:72:c9:87}
	I1127 11:16:06.165799  144765 main.go:141] libmachine: (multinode-397554-m02) DBG | domain multinode-397554-m02 has defined IP address 192.168.39.54 and MAC address 52:54:00:72:c9:87 in network mk-multinode-397554
	I1127 11:16:06.165957  144765 host.go:66] Checking if "multinode-397554-m02" exists ...
	I1127 11:16:06.166363  144765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:16:06.166410  144765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:16:06.180533  144765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37017
	I1127 11:16:06.180956  144765 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:16:06.181399  144765 main.go:141] libmachine: Using API Version  1
	I1127 11:16:06.181425  144765 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:16:06.181756  144765 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:16:06.181941  144765 main.go:141] libmachine: (multinode-397554-m02) Calling .DriverName
	I1127 11:16:06.182134  144765 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1127 11:16:06.182155  144765 main.go:141] libmachine: (multinode-397554-m02) Calling .GetSSHHostname
	I1127 11:16:06.184693  144765 main.go:141] libmachine: (multinode-397554-m02) DBG | domain multinode-397554-m02 has defined MAC address 52:54:00:72:c9:87 in network mk-multinode-397554
	I1127 11:16:06.185084  144765 main.go:141] libmachine: (multinode-397554-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:72:c9:87", ip: ""} in network mk-multinode-397554: {Iface:virbr1 ExpiryTime:2023-11-27 12:14:29 +0000 UTC Type:0 Mac:52:54:00:72:c9:87 Iaid: IPaddr:192.168.39.54 Prefix:24 Hostname:multinode-397554-m02 Clientid:01:52:54:00:72:c9:87}
	I1127 11:16:06.185126  144765 main.go:141] libmachine: (multinode-397554-m02) DBG | domain multinode-397554-m02 has defined IP address 192.168.39.54 and MAC address 52:54:00:72:c9:87 in network mk-multinode-397554
	I1127 11:16:06.185219  144765 main.go:141] libmachine: (multinode-397554-m02) Calling .GetSSHPort
	I1127 11:16:06.185429  144765 main.go:141] libmachine: (multinode-397554-m02) Calling .GetSSHKeyPath
	I1127 11:16:06.185569  144765 main.go:141] libmachine: (multinode-397554-m02) Calling .GetSSHUsername
	I1127 11:16:06.185705  144765 sshutil.go:53] new ssh client: &{IP:192.168.39.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17644-122411/.minikube/machines/multinode-397554-m02/id_rsa Username:docker}
	I1127 11:16:06.274169  144765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1127 11:16:06.286226  144765 status.go:257] multinode-397554-m02 status: &{Name:multinode-397554-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1127 11:16:06.286262  144765 status.go:255] checking status of multinode-397554-m03 ...
	I1127 11:16:06.286596  144765 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:16:06.286631  144765 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:16:06.302326  144765 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46187
	I1127 11:16:06.302736  144765 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:16:06.303286  144765 main.go:141] libmachine: Using API Version  1
	I1127 11:16:06.303308  144765 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:16:06.303589  144765 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:16:06.303813  144765 main.go:141] libmachine: (multinode-397554-m03) Calling .GetState
	I1127 11:16:06.305471  144765 status.go:330] multinode-397554-m03 host status = "Stopped" (err=<nil>)
	I1127 11:16:06.305488  144765 status.go:343] host is not running, skipping remaining checks
	I1127 11:16:06.305496  144765 status.go:257] multinode-397554-m03 status: &{Name:multinode-397554-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (4.00s)
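Note: the status checks above key off the exit code, not just the printed table: `minikube status` exits 0 when everything is running and non-zero (7 in this run) once any host is Stopped, while stdout still carries the per-node report. A sketch of reading that code from Go:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-397554", "status")
	out, err := cmd.Output() // stdout still carries the node table on failure
	fmt.Print(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// This run showed exit status 7 with one worker host Stopped.
		fmt.Println("status exited with code", exitErr.ExitCode())
	}
}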

TestMultiNode/serial/StartAfterStop (32.17s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-397554 node start m03 --alsologtostderr: (31.524452197s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (32.17s)

TestMultiNode/serial/RestartKeepsNodes (171.72s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-397554
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-397554
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-397554: (27.817790037s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-397554 --wait=true -v=8 --alsologtostderr
E1127 11:17:51.956943  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/ingress-addon-legacy-968829/client.crt: no such file or directory
E1127 11:18:04.045101  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
E1127 11:18:19.642373  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/ingress-addon-legacy-968829/client.crt: no such file or directory
E1127 11:19:14.479079  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
E1127 11:19:27.093780  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-397554 --wait=true -v=8 --alsologtostderr: (2m23.776269779s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-397554
--- PASS: TestMultiNode/serial/RestartKeepsNodes (171.72s)

TestMultiNode/serial/DeleteNode (1.76s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-397554 node delete m03: (1.206651895s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.76s)

TestMultiNode/serial/StopMultiNode (25.64s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-397554 stop: (25.45196972s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-397554 status: exit status 7 (94.730414ms)
-- stdout --
	multinode-397554
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-397554-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-397554 status --alsologtostderr: exit status 7 (92.418205ms)
-- stdout --
	multinode-397554
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-397554-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1127 11:19:57.555624  146634 out.go:296] Setting OutFile to fd 1 ...
	I1127 11:19:57.555754  146634 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:19:57.555763  146634 out.go:309] Setting ErrFile to fd 2...
	I1127 11:19:57.555767  146634 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1127 11:19:57.555943  146634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17644-122411/.minikube/bin
	I1127 11:19:57.556099  146634 out.go:303] Setting JSON to false
	I1127 11:19:57.556127  146634 mustload.go:65] Loading cluster: multinode-397554
	I1127 11:19:57.556230  146634 notify.go:220] Checking for updates...
	I1127 11:19:57.556504  146634 config.go:182] Loaded profile config "multinode-397554": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1127 11:19:57.556521  146634 status.go:255] checking status of multinode-397554 ...
	I1127 11:19:57.557151  146634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:19:57.557247  146634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:19:57.571453  146634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43269
	I1127 11:19:57.571840  146634 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:19:57.572386  146634 main.go:141] libmachine: Using API Version  1
	I1127 11:19:57.572413  146634 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:19:57.572737  146634 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:19:57.572906  146634 main.go:141] libmachine: (multinode-397554) Calling .GetState
	I1127 11:19:57.574397  146634 status.go:330] multinode-397554 host status = "Stopped" (err=<nil>)
	I1127 11:19:57.574411  146634 status.go:343] host is not running, skipping remaining checks
	I1127 11:19:57.574415  146634 status.go:257] multinode-397554 status: &{Name:multinode-397554 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1127 11:19:57.574430  146634 status.go:255] checking status of multinode-397554-m02 ...
	I1127 11:19:57.574751  146634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1127 11:19:57.574797  146634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1127 11:19:57.588719  146634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45371
	I1127 11:19:57.589112  146634 main.go:141] libmachine: () Calling .GetVersion
	I1127 11:19:57.589591  146634 main.go:141] libmachine: Using API Version  1
	I1127 11:19:57.589628  146634 main.go:141] libmachine: () Calling .SetConfigRaw
	I1127 11:19:57.589929  146634 main.go:141] libmachine: () Calling .GetMachineName
	I1127 11:19:57.590110  146634 main.go:141] libmachine: (multinode-397554-m02) Calling .GetState
	I1127 11:19:57.591584  146634 status.go:330] multinode-397554-m02 host status = "Stopped" (err=<nil>)
	I1127 11:19:57.591598  146634 status.go:343] host is not running, skipping remaining checks
	I1127 11:19:57.591609  146634 status.go:257] multinode-397554-m02 status: &{Name:multinode-397554-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.64s)

TestMultiNode/serial/RestartMultiNode (119.74s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-397554 --wait=true -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-397554 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m59.178980212s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-397554 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (119.74s)
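
Note: the go-template in the last step walks every node's status.conditions and prints only the Ready condition, so a healthy multi-node cluster yields one "True" line per node. A minimal standalone form of the same check (quoting loosened for the shell; point kubectl at whichever context you are testing) is:

	$ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	# expected after RestartMultiNode: one "True" per node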

TestMultiNode/serial/ValidateNameConflict (53.33s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-397554
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-397554-m02 --driver=kvm2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-397554-m02 --driver=kvm2 : exit status 14 (78.937161ms)
-- stdout --
	* [multinode-397554-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17644-122411/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-122411/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-397554-m02' is duplicated with machine name 'multinode-397554-m02' in profile 'multinode-397554'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-397554-m03 --driver=kvm2 
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-397554-m03 --driver=kvm2 : (51.96565017s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-397554
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-397554: exit status 80 (231.327735ms)
-- stdout --
	* Adding node m03 to cluster multinode-397554
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-397554-m03 already exists in multinode-397554-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-397554-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (53.33s)
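
Note: minikube names the machines of a multi-node profile <profile>-m02, <profile>-m03, and so on, which is why starting a fresh profile literally named multinode-397554-m02 fails with MK_USAGE, and why node add refuses a node name that another profile already owns. The commands from this test that reveal the names in use:

	$ out/minikube-linux-amd64 node list -p multinode-397554      # machines inside the profile
	$ out/minikube-linux-amd64 profile list --output json         # all profiles on the host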

TestPreload (198.75s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-373868 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E1127 11:23:04.045215  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
E1127 11:24:14.478951  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-373868 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m25.513085681s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-373868 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-373868 image pull gcr.io/k8s-minikube/busybox: (1.259187719s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-373868
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-373868: (13.114394573s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-373868 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E1127 11:25:37.522694  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-373868 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m37.595172537s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-373868 image list
helpers_test.go:175: Cleaning up "test-preload-373868" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-373868
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-373868: (1.027067438s)
--- PASS: TestPreload (198.75s)
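
Note: the test verifies that an image pulled into a non-preloaded cluster survives a stop/start that switches back to the preloaded tarball. A rough manual equivalent, with a hypothetical profile name, is:

	$ minikube start -p demo --preload=false --kubernetes-version=v1.24.4
	$ minikube -p demo image pull gcr.io/k8s-minikube/busybox
	$ minikube stop -p demo
	$ minikube start -p demo        # preload enabled again on restart
	$ minikube -p demo image list   # busybox should still appear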

TestScheduledStopUnix (123.87s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-359913 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-359913 --memory=2048 --driver=kvm2 : (52.04348607s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-359913 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-359913 -n scheduled-stop-359913
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-359913 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-359913 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-359913 -n scheduled-stop-359913
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-359913
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-359913 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1127 11:27:51.958585  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/ingress-addon-legacy-968829/client.crt: no such file or directory
E1127 11:28:04.045905  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-359913
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-359913: exit status 7 (76.048714ms)
-- stdout --
	scheduled-stop-359913
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-359913 -n scheduled-stop-359913
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-359913 -n scheduled-stop-359913: exit status 7 (79.795882ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-359913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-359913
--- PASS: TestScheduledStopUnix (123.87s)
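
Note: scheduled stops are armed, inspected, and disarmed entirely through the flags exercised above; a minimal sequence (hypothetical profile name) looks like:

	$ minikube stop -p demo --schedule 5m                     # arm a stop five minutes out
	$ minikube status -p demo --format='{{.TimeToStop}}'      # remaining time before the stop fires
	$ minikube stop -p demo --cancel-scheduled                # disarm the pending stop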

TestSkaffold (138.95s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe4176329681 version
skaffold_test.go:63: skaffold version: v2.9.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-861907 --memory=2600 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-861907 --memory=2600 --driver=kvm2 : (50.790970083s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe4176329681 run --minikube-profile skaffold-861907 --kube-context skaffold-861907 --status-check=true --port-forward=false --interactive=false
E1127 11:29:14.478964  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
E1127 11:29:15.002652  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/ingress-addon-legacy-968829/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe4176329681 run --minikube-profile skaffold-861907 --kube-context skaffold-861907 --status-check=true --port-forward=false --interactive=false: (1m16.294596499s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-69cbdf9f9c-5bdbp" [b0fa11e9-df67-4c8c-89ef-753d114e80a7] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.018795849s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7f5f5cfff7-845hq" [7dd7e8b5-613b-4ddf-a95a-eed3cc7af49e] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.009935734s
helpers_test.go:175: Cleaning up "skaffold-861907" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-861907
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-861907: (1.196340768s)
--- PASS: TestSkaffold (138.95s)
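
Note: the skaffold invocation pins both the minikube profile and the kube-context so the deploy cannot land on another cluster; reduced to its essentials it reads:

	$ skaffold run --minikube-profile skaffold-861907 --kube-context skaffold-861907 \
	    --status-check=true --port-forward=false --interactive=false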

TestRunningBinaryUpgrade (193.52s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.2005741417.exe start -p running-upgrade-986275 --memory=2200 --vm-driver=kvm2 
E1127 11:34:14.479503  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.2005741417.exe start -p running-upgrade-986275 --memory=2200 --vm-driver=kvm2 : (2m10.169638637s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-986275 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E1127 11:36:07.094848  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-986275 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m1.291955268s)
helpers_test.go:175: Cleaning up "running-upgrade-986275" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-986275
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-986275: (1.733006855s)
--- PASS: TestRunningBinaryUpgrade (193.52s)

TestKubernetesUpgrade (165.32s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-011152 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-011152 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (1m18.626850689s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-011152
E1127 11:35:43.015554  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/skaffold-861907/client.crt: no such file or directory
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-011152: (4.50601514s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-011152 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-011152 status --format={{.Host}}: exit status 7 (90.910371ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-011152 --memory=2200 --kubernetes-version=v1.28.4 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-011152 --memory=2200 --kubernetes-version=v1.28.4 --alsologtostderr -v=1 --driver=kvm2 : (49.088384735s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-011152 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-011152 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-011152 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (140.722214ms)
-- stdout --
	* [kubernetes-upgrade-011152] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17644-122411/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-122411/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.4 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-011152
	    minikube start -p kubernetes-upgrade-011152 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0111522 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.4, by running:
	    
	    minikube start -p kubernetes-upgrade-011152 --kubernetes-version=v1.28.4
	    
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-011152 --memory=2200 --kubernetes-version=v1.28.4 --alsologtostderr -v=1 --driver=kvm2 
E1127 11:36:44.457278  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/skaffold-861907/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-011152 --memory=2200 --kubernetes-version=v1.28.4 --alsologtostderr -v=1 --driver=kvm2 : (31.658273158s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-011152" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-011152
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-011152: (1.117157979s)
--- PASS: TestKubernetesUpgrade (165.32s)
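
Note: upgrades reuse the profile in place (start at v1.16.0, stop, start at v1.28.4), while the attempted downgrade is rejected with K8S_DOWNGRADE_UNSUPPORTED. Per the suggestion block above, a downgrade means recreating the cluster:

	$ minikube delete -p kubernetes-upgrade-011152
	$ minikube start -p kubernetes-upgrade-011152 --kubernetes-version=v1.16.0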

TestStoppedBinaryUpgrade/Setup (0.34s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.34s)

TestStoppedBinaryUpgrade/Upgrade (204.5s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.3864162520.exe start -p stopped-upgrade-625614 --memory=2200 --vm-driver=kvm2 
E1127 11:36:03.496269  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/skaffold-861907/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.3864162520.exe start -p stopped-upgrade-625614 --memory=2200 --vm-driver=kvm2 : (1m52.122481211s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.3864162520.exe -p stopped-upgrade-625614 stop
E1127 11:38:04.045351  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
E1127 11:38:06.377824  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/skaffold-861907/client.crt: no such file or directory
E1127 11:38:06.965065  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/gvisor-691176/client.crt: no such file or directory
E1127 11:38:06.970435  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/gvisor-691176/client.crt: no such file or directory
E1127 11:38:06.980743  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/gvisor-691176/client.crt: no such file or directory
E1127 11:38:07.001056  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/gvisor-691176/client.crt: no such file or directory
E1127 11:38:07.041256  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/gvisor-691176/client.crt: no such file or directory
E1127 11:38:07.121605  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/gvisor-691176/client.crt: no such file or directory
E1127 11:38:07.282015  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/gvisor-691176/client.crt: no such file or directory
E1127 11:38:07.602616  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/gvisor-691176/client.crt: no such file or directory
E1127 11:38:08.243073  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/gvisor-691176/client.crt: no such file or directory
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.3864162520.exe -p stopped-upgrade-625614 stop: (13.674436203s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-625614 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E1127 11:38:09.523515  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/gvisor-691176/client.crt: no such file or directory
E1127 11:38:12.084014  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/gvisor-691176/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-625614 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m18.704655923s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (204.50s)

TestPause/serial/Start (75.21s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-247271 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-247271 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m15.212101785s)
--- PASS: TestPause/serial/Start (75.21s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-277241 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-277241 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (95.859135ms)
-- stdout --
	* [NoKubernetes-277241] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17644-122411/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17644-122411/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
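
Note: --no-kubernetes is mutually exclusive with --kubernetes-version, and a version pinned in the global config trips the same MK_USAGE check; the remedy printed in the error text is:

	$ minikube config unset kubernetes-version
	$ minikube start -p NoKubernetes-277241 --no-kubernetes --driver=kvm2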

TestNoKubernetes/serial/StartWithK8s (88.51s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-277241 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-277241 --driver=kvm2 : (1m28.154672196s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-277241 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (88.51s)

TestNetworkPlugins/group/auto/Start (124.55s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-128714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E1127 11:37:51.958991  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/ingress-addon-legacy-968829/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-128714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (2m4.552202417s)
--- PASS: TestNetworkPlugins/group/auto/Start (124.55s)

TestPause/serial/SecondStartNoReconfiguration (87.34s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-247271 --alsologtostderr -v=1 --driver=kvm2 
E1127 11:38:17.204548  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/gvisor-691176/client.crt: no such file or directory
E1127 11:38:27.444736  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/gvisor-691176/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-247271 --alsologtostderr -v=1 --driver=kvm2 : (1m27.316313339s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (87.34s)

TestNoKubernetes/serial/StartWithStopK8s (43.79s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-277241 --no-kubernetes --driver=kvm2 
E1127 11:38:47.924990  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/gvisor-691176/client.crt: no such file or directory
E1127 11:39:14.479803  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-277241 --no-kubernetes --driver=kvm2 : (42.375360563s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-277241 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-277241 status -o json: exit status 2 (309.74553ms)
-- stdout --
	{"Name":"NoKubernetes-277241","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-277241
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-277241: (1.102317821s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (43.79s)

TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-128714 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

TestNetworkPlugins/group/auto/NetCatPod (12.37s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-128714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-h2chg" [a1a82b04-9e62-4582-b531-6164afbe3973] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-h2chg" [a1a82b04-9e62-4582-b531-6164afbe3973] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.019910396s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.37s)

TestNoKubernetes/serial/Start (30.09s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-277241 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-277241 --no-kubernetes --driver=kvm2 : (30.089476835s)
--- PASS: TestNoKubernetes/serial/Start (30.09s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.73s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-625614
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-625614: (1.733919852s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.73s)

TestNetworkPlugins/group/auto/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-128714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

TestNetworkPlugins/group/auto/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-128714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.24s)

TestNetworkPlugins/group/auto/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-128714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)
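
Note: the Localhost and HairPin probes reuse the same netcat pod. Localhost dials localhost:8080 inside the pod, while HairPin dials the pod's own Service name, so traffic must leave the pod and hairpin back through the service VIP:

	$ kubectl --context auto-128714 exec deployment/netcat -- \
	    /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"   # exits 0 only if the pod reaches itself via its service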

TestNetworkPlugins/group/kindnet/Start (99.82s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-128714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-128714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m39.824549439s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (99.82s)

TestPause/serial/Pause (1s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-247271 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-247271 --alsologtostderr -v=5: (1.000166119s)
--- PASS: TestPause/serial/Pause (1.00s)

TestNetworkPlugins/group/calico/Start (133.94s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-128714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-128714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m13.941091093s)
--- PASS: TestNetworkPlugins/group/calico/Start (133.94s)

TestPause/serial/VerifyStatus (0.27s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-247271 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-247271 --output=json --layout=cluster: exit status 2 (267.273903ms)
-- stdout --
	{"Name":"pause-247271","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-247271","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)
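
Note: while the cluster is paused, status intentionally exits non-zero and the JSON payload carries StatusCode 418 ("Paused") for the node and apiserver. To pull just the top-level field (assuming jq is available on the host):

	$ minikube status -p pause-247271 --output=json --layout=cluster | jq '.StatusCode'
	# 418 while paused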

TestPause/serial/Unpause (0.59s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-247271 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.59s)

TestPause/serial/PauseAgain (0.76s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-247271 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.76s)

TestPause/serial/DeletePaused (1.08s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-247271 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-247271 --alsologtostderr -v=5: (1.082341577s)
--- PASS: TestPause/serial/DeletePaused (1.08s)

TestPause/serial/VerifyDeletedResources (0.73s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.73s)

TestNetworkPlugins/group/custom-flannel/Start (137.97s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-128714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-128714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (2m17.973650633s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (137.97s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-277241 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-277241 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.150177ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

TestNoKubernetes/serial/ProfileList (0.62s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.62s)

TestNoKubernetes/serial/Stop (2.24s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-277241
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-277241: (2.236014458s)
--- PASS: TestNoKubernetes/serial/Stop (2.24s)

TestNoKubernetes/serial/StartNoArgs (106.75s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-277241 --driver=kvm2 
E1127 11:40:22.534170  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/skaffold-861907/client.crt: no such file or directory
E1127 11:40:50.218566  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/skaffold-861907/client.crt: no such file or directory
E1127 11:40:50.806831  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/gvisor-691176/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-277241 --driver=kvm2 : (1m46.749509259s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (106.75s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-h245b" [bf3826eb-baa4-412d-a278-dd7853018bf4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.022909692s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-128714 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

TestNetworkPlugins/group/kindnet/NetCatPod (15.32s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-128714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context kindnet-128714 replace --force -f testdata/netcat-deployment.yaml: (1.563662758s)
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dgzm5" [a0cc0336-5836-4a7d-9e0b-51167ae3b595] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dgzm5" [a0cc0336-5836-4a7d-9e0b-51167ae3b595] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.024687244s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (15.32s)

TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-128714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

TestNetworkPlugins/group/kindnet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-128714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.19s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-128714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-277241 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-277241 "sudo systemctl is-active --quiet service kubelet": exit status 1 (240.309551ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

TestNetworkPlugins/group/false/Start (76.63s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-128714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-128714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m16.630797115s)
--- PASS: TestNetworkPlugins/group/false/Start (76.63s)

TestNetworkPlugins/group/enable-default-cni/Start (97.79s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-128714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-128714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m37.79276382s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (97.79s)

TestNetworkPlugins/group/calico/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-v2ss2" [2c8e2ccd-5502-471b-a0e8-89c5132db1c7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.037582334s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)
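
Note: the ControllerPod helper polls for a Running pod matching the CNI's label selector; an equivalent check with stock kubectl (a sketch, not what the harness itself runs) is:

	$ kubectl --context calico-128714 -n kube-system wait pod \
	    -l k8s-app=calico-node --for=condition=Ready --timeout=10m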

TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-128714 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

TestNetworkPlugins/group/calico/NetCatPod (13.44s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-128714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lnkrg" [daa4e6fd-5414-4c02-987d-fecacf1fff4f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lnkrg" [daa4e6fd-5414-4c02-987d-fecacf1fff4f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.020688908s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.44s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-128714 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.48s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-128714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zsclk" [fffa66fb-1ed6-4f12-832c-8a4d56734846] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zsclk" [fffa66fb-1ed6-4f12-832c-8a4d56734846] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.021021654s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.48s)

TestNetworkPlugins/group/calico/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-128714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

TestNetworkPlugins/group/calico/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-128714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

TestNetworkPlugins/group/calico/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-128714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-128714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-128714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-128714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

TestNetworkPlugins/group/flannel/Start (96.55s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-128714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-128714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m36.549296053s)
--- PASS: TestNetworkPlugins/group/flannel/Start (96.55s)

TestNetworkPlugins/group/bridge/Start (116.99s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-128714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E1127 11:42:51.956399  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/ingress-addon-legacy-968829/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-128714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m56.99241736s)
--- PASS: TestNetworkPlugins/group/bridge/Start (116.99s)
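
Note: the "E1127 ... cert_rotation.go:168] key failed with : open .../client.crt: no such file or directory" lines interleaved here (and through the rest of this log) are not test failures. They appear to come from client-go's certificate-rotation watcher inside the long-running test process, which still references client certificates of profiles that earlier groups have already torn down (ingress-addon-legacy-968829, auto-128714, and others). Assuming the same workspace layout, the missing files can be confirmed with:
	ls /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/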

TestNetworkPlugins/group/false/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-128714 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.25s)
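
Note: KubeletFlags is a sanity check that the kubelet inside the VM is running with the command line implied by the selected network configuration; "pgrep -a kubelet" prints the full kubelet invocation for the test to inspect. The same output can be pulled by hand:
	out/minikube-linux-amd64 ssh -p false-128714 "pgrep -a kubelet"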

TestNetworkPlugins/group/false/NetCatPod (11.41s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-128714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ct6vc" [b5516c61-79a9-4f42-96d0-6627f7a78a09] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ct6vc" [b5516c61-79a9-4f42-96d0-6627f7a78a09] Running
E1127 11:43:04.045205  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
E1127 11:43:06.964814  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/gvisor-691176/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.010414724s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.41s)
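
Note: each NetCatPod step (re)deploys the shared netcat Deployment and waits up to 15m for its pod to become Ready; "replace --force" guarantees a fresh pod even when a previous group already installed the manifest. A roughly equivalent manual sequence, with kubectl wait standing in for the test's own polling helper:
	kubectl --context false-128714 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context false-128714 wait --for=condition=ready pod -l app=netcat --timeout=15m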

TestNetworkPlugins/group/false/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-128714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.21s)

TestNetworkPlugins/group/false/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-128714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.17s)

TestNetworkPlugins/group/false/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-128714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.16s)

TestNetworkPlugins/group/kubenet/Start (99.85s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-128714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-128714 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m39.851750223s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (99.85s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-128714 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-128714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-727bm" [845d20ad-64cd-4035-a702-7e14ad1ce00d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1127 11:43:34.647272  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/gvisor-691176/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-727bm" [845d20ad-64cd-4035-a702-7e14ad1ce00d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.009674891s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.37s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-128714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-128714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-128714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestStartStop/group/old-k8s-version/serial/FirstStart (166.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-337707 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-337707 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (2m46.412967371s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (166.41s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-rvh59" [55112a29-68b5-436b-8a4f-bfe8015bbf81] Running
E1127 11:44:14.478811  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
E1127 11:44:15.208618  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/auto-128714/client.crt: no such file or directory
E1127 11:44:15.214055  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/auto-128714/client.crt: no such file or directory
E1127 11:44:15.226430  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/auto-128714/client.crt: no such file or directory
E1127 11:44:15.246743  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/auto-128714/client.crt: no such file or directory
E1127 11:44:15.287851  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/auto-128714/client.crt: no such file or directory
E1127 11:44:15.368212  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/auto-128714/client.crt: no such file or directory
E1127 11:44:15.529127  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/auto-128714/client.crt: no such file or directory
E1127 11:44:15.849754  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/auto-128714/client.crt: no such file or directory
E1127 11:44:16.490379  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/auto-128714/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.018818943s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
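
Note: ControllerPod only exists for CNIs that ship their own controller/daemon pod; for flannel the test waits up to 10m for a pod labelled app=flannel in the kube-flannel namespace to run. Roughly the same check by hand:
	kubectl --context flannel-128714 -n kube-flannel wait --for=condition=ready pod -l app=flannel --timeout=10m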

TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-128714 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/flannel/NetCatPod (14.46s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-128714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dfcjl" [885bea40-b570-4026-9938-7f159e0e7929] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1127 11:44:17.771571  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/auto-128714/client.crt: no such file or directory
E1127 11:44:20.331919  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/auto-128714/client.crt: no such file or directory
E1127 11:44:25.453076  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/auto-128714/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-dfcjl" [885bea40-b570-4026-9938-7f159e0e7929] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.021759227s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.46s)

TestNetworkPlugins/group/flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-128714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

TestNetworkPlugins/group/flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-128714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-128714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-128714 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

TestNetworkPlugins/group/bridge/NetCatPod (12.41s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-128714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lx75v" [d990d615-5390-47c5-a643-91dc420ae673] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1127 11:44:35.693758  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/auto-128714/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-lx75v" [d990d615-5390-47c5-a643-91dc420ae673] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.012045757s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.41s)

TestNetworkPlugins/group/bridge/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-128714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.25s)

TestNetworkPlugins/group/bridge/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-128714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.22s)

TestNetworkPlugins/group/bridge/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-128714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.28s)

TestStartStop/group/no-preload/serial/FirstStart (95.44s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-822966 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.4
E1127 11:44:56.174779  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/auto-128714/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-822966 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.4: (1m35.436772237s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (95.44s)

TestStartStop/group/embed-certs/serial/FirstStart (99.74s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-700864 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-700864 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.4: (1m39.743121098s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (99.74s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-128714 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kubenet/NetCatPod (12.37s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-128714 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6jllh" [eb1d1e26-ad3f-4c1f-8299-89451ef1d24f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6jllh" [eb1d1e26-ad3f-4c1f-8299-89451ef1d24f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.011847515s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.37s)

TestNetworkPlugins/group/kubenet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-128714 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.18s)

TestNetworkPlugins/group/kubenet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-128714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

TestNetworkPlugins/group/kubenet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-128714 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.16s)
E1127 11:50:49.220761  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kubenet-128714/client.crt: no such file or directory
E1127 11:50:57.436675  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/bridge-128714/client.crt: no such file or directory
E1127 11:51:10.142602  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kindnet-128714/client.crt: no such file or directory
E1127 11:51:12.814281  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/enable-default-cni-128714/client.crt: no such file or directory
E1127 11:51:30.181473  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kubenet-128714/client.crt: no such file or directory
E1127 11:51:37.826713  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kindnet-128714/client.crt: no such file or directory
E1127 11:51:45.579556  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/skaffold-861907/client.crt: no such file or directory
E1127 11:51:55.860246  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/flannel-128714/client.crt: no such file or directory
E1127 11:51:57.698666  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/calico-128714/client.crt: no such file or directory
E1127 11:52:06.317878  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/custom-flannel-128714/client.crt: no such file or directory
E1127 11:52:19.357765  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/bridge-128714/client.crt: no such file or directory
E1127 11:52:25.383066  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/calico-128714/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (92.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-028212 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.4
E1127 11:45:55.003809  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/ingress-addon-legacy-968829/client.crt: no such file or directory
E1127 11:46:10.142466  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kindnet-128714/client.crt: no such file or directory
E1127 11:46:10.147763  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kindnet-128714/client.crt: no such file or directory
E1127 11:46:10.158049  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kindnet-128714/client.crt: no such file or directory
E1127 11:46:10.178365  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kindnet-128714/client.crt: no such file or directory
E1127 11:46:10.219173  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kindnet-128714/client.crt: no such file or directory
E1127 11:46:10.299596  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kindnet-128714/client.crt: no such file or directory
E1127 11:46:10.459930  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kindnet-128714/client.crt: no such file or directory
E1127 11:46:10.780693  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kindnet-128714/client.crt: no such file or directory
E1127 11:46:11.421877  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kindnet-128714/client.crt: no such file or directory
E1127 11:46:12.703041  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kindnet-128714/client.crt: no such file or directory
E1127 11:46:15.263528  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kindnet-128714/client.crt: no such file or directory
E1127 11:46:20.383955  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kindnet-128714/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-028212 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.4: (1m32.067501562s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (92.07s)

TestStartStop/group/no-preload/serial/DeployApp (9.53s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-822966 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3384300b-10a3-4872-85de-d4b595a50185] Pending
helpers_test.go:344: "busybox" [3384300b-10a3-4872-85de-d4b595a50185] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3384300b-10a3-4872-85de-d4b595a50185] Running
E1127 11:46:30.624413  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kindnet-128714/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.039125351s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-822966 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.53s)
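
Note: DeployApp creates a busybox pod from testdata/busybox.yaml, waits up to 8m for it to become healthy, and then execs "ulimit -n" in it as a cheap end-to-end check that the API server, scheduler, kubelet and exec path all work after the first start. By hand (kubectl wait standing in for the test's polling):
	kubectl --context no-preload-822966 create -f testdata/busybox.yaml
	kubectl --context no-preload-822966 wait --for=condition=ready pod/busybox --timeout=8m
	kubectl --context no-preload-822966 exec busybox -- /bin/sh -c "ulimit -n"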

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.31s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-822966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-822966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.208355419s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-822966 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.31s)
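
Note: the --images and --registries flags override where the metrics-server addon pulls its image from; pointing it at registry.k8s.io/echoserver:1.4 behind the unreachable fake.domain registry is deliberate, since the assertion only needs the Deployment object to exist with the overridden image, not for that image to be pullable. The override can be verified manually (the grep filter is illustrative, not part of the test):
	out/minikube-linux-amd64 addons enable metrics-server -p no-preload-822966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	kubectl --context no-preload-822966 describe deploy/metrics-server -n kube-system | grep -i image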

TestStartStop/group/no-preload/serial/Stop (13.17s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-822966 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-822966 --alsologtostderr -v=3: (13.165905619s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.17s)

TestStartStop/group/embed-certs/serial/DeployApp (10.52s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-700864 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a774f145-b4ce-4a83-b171-1c7215bbe997] Pending
helpers_test.go:344: "busybox" [a774f145-b4ce-4a83-b171-1c7215bbe997] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a774f145-b4ce-4a83-b171-1c7215bbe997] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.035783297s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-700864 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.52s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-337707 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8826a461-459f-4665-b183-50b572908897] Pending
helpers_test.go:344: "busybox" [8826a461-459f-4665-b183-50b572908897] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8826a461-459f-4665-b183-50b572908897] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.036533573s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-337707 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.54s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-822966 -n no-preload-822966
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-822966 -n no-preload-822966: exit status 7 (104.718334ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-822966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)
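
Note: "minikube status" reports cluster health through its exit code, and 7 here lines up with host, cluster and Kubernetes all being down on the stopped VM, hence the harness's own "status error: exit status 7 (may be ok)". Enabling the dashboard addon still succeeds because addon configuration lives in the profile on disk, not in the stopped cluster:
	out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-822966 -n no-preload-822966; echo "status exit code: $?"
	out/minikube-linux-amd64 addons enable dashboard -p no-preload-822966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4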

TestStartStop/group/no-preload/serial/SecondStart (335.88s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-822966 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.4
E1127 11:46:51.104919  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kindnet-128714/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-822966 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.4: (5m35.583711081s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-822966 -n no-preload-822966
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (335.88s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-337707 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-337707 describe deploy/metrics-server -n kube-system
E1127 11:46:58.016465  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/calico-128714/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.39s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-700864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1127 11:46:57.698683  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/calico-128714/client.crt: no such file or directory
E1127 11:46:57.704000  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/calico-128714/client.crt: no such file or directory
E1127 11:46:57.714306  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/calico-128714/client.crt: no such file or directory
E1127 11:46:57.734630  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/calico-128714/client.crt: no such file or directory
E1127 11:46:57.775565  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/calico-128714/client.crt: no such file or directory
E1127 11:46:57.855992  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/calico-128714/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-700864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.285630145s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-700864 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.39s)

TestStartStop/group/old-k8s-version/serial/Stop (13.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-337707 --alsologtostderr -v=3
E1127 11:46:58.336680  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/calico-128714/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-337707 --alsologtostderr -v=3: (13.149741483s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.15s)

TestStartStop/group/embed-certs/serial/Stop (13.22s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-700864 --alsologtostderr -v=3
E1127 11:46:58.978264  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/calico-128714/client.crt: no such file or directory
E1127 11:46:59.059325  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/auto-128714/client.crt: no such file or directory
E1127 11:47:00.258999  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/calico-128714/client.crt: no such file or directory
E1127 11:47:02.820032  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/calico-128714/client.crt: no such file or directory
E1127 11:47:06.318241  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/custom-flannel-128714/client.crt: no such file or directory
E1127 11:47:06.323496  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/custom-flannel-128714/client.crt: no such file or directory
E1127 11:47:06.333770  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/custom-flannel-128714/client.crt: no such file or directory
E1127 11:47:06.354024  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/custom-flannel-128714/client.crt: no such file or directory
E1127 11:47:06.394269  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/custom-flannel-128714/client.crt: no such file or directory
E1127 11:47:06.474583  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/custom-flannel-128714/client.crt: no such file or directory
E1127 11:47:06.634960  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/custom-flannel-128714/client.crt: no such file or directory
E1127 11:47:06.955521  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/custom-flannel-128714/client.crt: no such file or directory
E1127 11:47:07.596321  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/custom-flannel-128714/client.crt: no such file or directory
E1127 11:47:07.940337  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/calico-128714/client.crt: no such file or directory
E1127 11:47:08.876475  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/custom-flannel-128714/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-700864 --alsologtostderr -v=3: (13.222834173s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.22s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-337707 -n old-k8s-version-337707
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-337707 -n old-k8s-version-337707: exit status 7 (88.733573ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-337707 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/old-k8s-version/serial/SecondStart (459.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-337707 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E1127 11:47:11.437413  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/custom-flannel-128714/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-337707 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (7m38.840710364s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-337707 -n old-k8s-version-337707
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (459.12s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-700864 -n embed-certs-700864
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-700864 -n embed-certs-700864: exit status 7 (92.490232ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-700864 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-028212 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4137d58f-bcad-4979-aaac-65a856b15dbf] Pending
helpers_test.go:344: "busybox" [4137d58f-bcad-4979-aaac-65a856b15dbf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4137d58f-bcad-4979-aaac-65a856b15dbf] Running
E1127 11:47:16.558095  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/custom-flannel-128714/client.crt: no such file or directory
E1127 11:47:18.181213  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/calico-128714/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.03223501s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-028212 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.51s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-028212 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-028212 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.084002206s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-028212 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-028212 --alsologtostderr -v=3
E1127 11:47:26.798908  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/custom-flannel-128714/client.crt: no such file or directory
E1127 11:47:32.065239  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kindnet-128714/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-028212 --alsologtostderr -v=3: (13.133534134s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.13s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-028212 -n default-k8s-diff-port-028212
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-028212 -n default-k8s-diff-port-028212: exit status 7 (84.637058ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-028212 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (338.66s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-028212 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.4
E1127 11:47:38.661702  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/calico-128714/client.crt: no such file or directory
E1127 11:47:47.279281  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/custom-flannel-128714/client.crt: no such file or directory
E1127 11:47:51.957045  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/ingress-addon-legacy-968829/client.crt: no such file or directory
E1127 11:47:56.304586  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/false-128714/client.crt: no such file or directory
E1127 11:47:56.309795  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/false-128714/client.crt: no such file or directory
E1127 11:47:56.320270  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/false-128714/client.crt: no such file or directory
E1127 11:47:56.340512  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/false-128714/client.crt: no such file or directory
E1127 11:47:56.380760  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/false-128714/client.crt: no such file or directory
E1127 11:47:56.461120  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/false-128714/client.crt: no such file or directory
E1127 11:47:56.621759  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/false-128714/client.crt: no such file or directory
E1127 11:47:56.942293  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/false-128714/client.crt: no such file or directory
E1127 11:47:57.583527  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/false-128714/client.crt: no such file or directory
E1127 11:47:58.864642  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/false-128714/client.crt: no such file or directory
E1127 11:48:01.425545  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/false-128714/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-028212 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.4: (5m38.368573142s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-028212 -n default-k8s-diff-port-028212
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (338.66s)

TestStartStop/group/newest-cni/serial/FirstStart (86.6s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-693564 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.4
E1127 11:48:16.786439  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/false-128714/client.crt: no such file or directory
E1127 11:48:19.621958  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/calico-128714/client.crt: no such file or directory
E1127 11:48:28.240223  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/custom-flannel-128714/client.crt: no such file or directory
E1127 11:48:28.969606  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/enable-default-cni-128714/client.crt: no such file or directory
E1127 11:48:28.974898  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/enable-default-cni-128714/client.crt: no such file or directory
E1127 11:48:28.985189  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/enable-default-cni-128714/client.crt: no such file or directory
E1127 11:48:29.005493  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/enable-default-cni-128714/client.crt: no such file or directory
E1127 11:48:29.046574  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/enable-default-cni-128714/client.crt: no such file or directory
E1127 11:48:29.127659  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/enable-default-cni-128714/client.crt: no such file or directory
E1127 11:48:29.287866  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/enable-default-cni-128714/client.crt: no such file or directory
E1127 11:48:29.608326  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/enable-default-cni-128714/client.crt: no such file or directory
E1127 11:48:30.248747  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/enable-default-cni-128714/client.crt: no such file or directory
E1127 11:48:31.529408  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/enable-default-cni-128714/client.crt: no such file or directory
E1127 11:48:34.090070  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/enable-default-cni-128714/client.crt: no such file or directory
E1127 11:48:37.267509  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/false-128714/client.crt: no such file or directory
E1127 11:48:39.210596  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/enable-default-cni-128714/client.crt: no such file or directory
E1127 11:48:49.451425  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/enable-default-cni-128714/client.crt: no such file or directory
E1127 11:48:53.986471  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kindnet-128714/client.crt: no such file or directory
E1127 11:49:09.932542  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/enable-default-cni-128714/client.crt: no such file or directory
E1127 11:49:12.016643  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/flannel-128714/client.crt: no such file or directory
E1127 11:49:12.021977  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/flannel-128714/client.crt: no such file or directory
E1127 11:49:12.032287  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/flannel-128714/client.crt: no such file or directory
E1127 11:49:12.052635  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/flannel-128714/client.crt: no such file or directory
E1127 11:49:12.093109  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/flannel-128714/client.crt: no such file or directory
E1127 11:49:12.173515  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/flannel-128714/client.crt: no such file or directory
E1127 11:49:12.334068  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/flannel-128714/client.crt: no such file or directory
E1127 11:49:12.655051  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/flannel-128714/client.crt: no such file or directory
E1127 11:49:13.296016  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/flannel-128714/client.crt: no such file or directory
E1127 11:49:14.479325  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/functional-397013/client.crt: no such file or directory
E1127 11:49:14.576932  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/flannel-128714/client.crt: no such file or directory
E1127 11:49:15.208220  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/auto-128714/client.crt: no such file or directory
E1127 11:49:17.137377  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/flannel-128714/client.crt: no such file or directory
E1127 11:49:18.228477  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/false-128714/client.crt: no such file or directory
E1127 11:49:22.258626  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/flannel-128714/client.crt: no such file or directory
E1127 11:49:32.499027  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/flannel-128714/client.crt: no such file or directory
E1127 11:49:35.513315  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/bridge-128714/client.crt: no such file or directory
E1127 11:49:35.518663  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/bridge-128714/client.crt: no such file or directory
E1127 11:49:35.528985  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/bridge-128714/client.crt: no such file or directory
E1127 11:49:35.549310  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/bridge-128714/client.crt: no such file or directory
E1127 11:49:35.589622  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/bridge-128714/client.crt: no such file or directory
E1127 11:49:35.669972  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/bridge-128714/client.crt: no such file or directory
E1127 11:49:35.830621  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/bridge-128714/client.crt: no such file or directory
E1127 11:49:36.151232  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/bridge-128714/client.crt: no such file or directory
E1127 11:49:36.792400  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/bridge-128714/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-693564 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.4: (1m26.598011466s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (86.60s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-693564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1127 11:49:38.073416  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/bridge-128714/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-693564 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.07080051s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/newest-cni/serial/Stop (8.13s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-693564 --alsologtostderr -v=3
E1127 11:49:40.634243  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/bridge-128714/client.crt: no such file or directory
E1127 11:49:41.542853  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/calico-128714/client.crt: no such file or directory
E1127 11:49:42.899639  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/auto-128714/client.crt: no such file or directory
E1127 11:49:45.754861  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/bridge-128714/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-693564 --alsologtostderr -v=3: (8.126492767s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.13s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-693564 -n newest-cni-693564
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-693564 -n newest-cni-693564: exit status 7 (85.951753ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-693564 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)
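
A note for anyone scripting the same check outside the test harness: as the log shows, `minikube status` exits non-zero (here 7, flagged "may be ok") while the host is stopped, so a script has to guard the call. A minimal sketch reusing the exact profile and flags from this run:

	# Sketch only: reproduce the EnableAddonAfterStop steps by hand.
	# `status` exits 7 for a stopped host (treated as acceptable above),
	# so the guard keeps `set -e` scripts from aborting.
	out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-693564 -n newest-cni-693564 || echo "host not running (exit $?)"
	out/minikube-linux-amd64 addons enable dashboard -p newest-cni-693564 --images=MetricsScraper=registry.k8s.io/echoserver:1.4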

TestStartStop/group/newest-cni/serial/SecondStart (49.16s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-693564 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.4
E1127 11:49:50.161045  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/custom-flannel-128714/client.crt: no such file or directory
E1127 11:49:50.893322  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/enable-default-cni-128714/client.crt: no such file or directory
E1127 11:49:52.979301  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/flannel-128714/client.crt: no such file or directory
E1127 11:49:55.995088  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/bridge-128714/client.crt: no such file or directory
E1127 11:50:08.259312  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kubenet-128714/client.crt: no such file or directory
E1127 11:50:08.264604  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kubenet-128714/client.crt: no such file or directory
E1127 11:50:08.274908  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kubenet-128714/client.crt: no such file or directory
E1127 11:50:08.295237  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kubenet-128714/client.crt: no such file or directory
E1127 11:50:08.335544  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kubenet-128714/client.crt: no such file or directory
E1127 11:50:08.415917  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kubenet-128714/client.crt: no such file or directory
E1127 11:50:08.576407  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kubenet-128714/client.crt: no such file or directory
E1127 11:50:08.896887  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kubenet-128714/client.crt: no such file or directory
E1127 11:50:09.537223  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kubenet-128714/client.crt: no such file or directory
E1127 11:50:10.817765  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kubenet-128714/client.crt: no such file or directory
E1127 11:50:13.378264  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kubenet-128714/client.crt: no such file or directory
E1127 11:50:16.476295  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/bridge-128714/client.crt: no such file or directory
E1127 11:50:18.499012  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kubenet-128714/client.crt: no such file or directory
E1127 11:50:22.534676  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/skaffold-861907/client.crt: no such file or directory
E1127 11:50:28.739691  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/kubenet-128714/client.crt: no such file or directory
E1127 11:50:33.939492  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/flannel-128714/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-693564 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.4: (48.87600157s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-693564 -n newest-cni-693564
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (49.16s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-693564 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (2.49s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-693564 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-693564 -n newest-cni-693564
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-693564 -n newest-cni-693564: exit status 2 (278.979867ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-693564 -n newest-cni-693564
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-693564 -n newest-cni-693564: exit status 2 (268.03619ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-693564 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-693564 -n newest-cni-693564
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-693564 -n newest-cni-693564
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.49s)
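
The pause semantics captured above are easy to misread: after `pause`, the API server reports "Paused" while the kubelet reports "Stopped", and `status` exits 2 for both queries, which the harness tolerates ("may be ok"). A minimal stand-alone sketch using the same commands as the log:

	# Sketch only: pause, inspect component states, unpause.
	# `status` exits 2 while components are paused/stopped, hence the guards.
	out/minikube-linux-amd64 pause -p newest-cni-693564 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-693564 -n newest-cni-693564 || true  # prints "Paused"
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-693564 -n newest-cni-693564 || true    # prints "Stopped"
	out/minikube-linux-amd64 unpause -p newest-cni-693564 --alsologtostderr -v=1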

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (16.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-555jf" [82095082-1f83-43b4-b06f-102d771a6545] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1127 11:52:34.001911  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/custom-flannel-128714/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-555jf" [82095082-1f83-43b4-b06f-102d771a6545] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.024516046s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (16.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-555jf" [82095082-1f83-43b4-b06f-102d771a6545] Running
E1127 11:52:47.095364  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/addons-097795/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011353439s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-822966 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-822966 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)
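
The image verification above shells into the VM and dumps the CRI image list as JSON. A hedged sketch of the same query; the `jq` filter is an illustration only (the harness decodes the JSON in Go rather than with jq):

	# Sketch only: list image tags known to the runtime inside the VM.
	out/minikube-linux-amd64 ssh -p no-preload-822966 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'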

TestStartStop/group/no-preload/serial/Pause (2.55s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-822966 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-822966 -n no-preload-822966
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-822966 -n no-preload-822966: exit status 2 (270.178556ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-822966 -n no-preload-822966
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-822966 -n no-preload-822966: exit status 2 (259.084304ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-822966 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-822966 -n no-preload-822966
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-822966 -n no-preload-822966
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.55s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dmlr8" [db82c0dd-09f5-4617-ab58-bbd258fb8165] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.03103399s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dmlr8" [db82c0dd-09f5-4617-ab58-bbd258fb8165] Running
E1127 11:53:23.989802  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/false-128714/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01201085s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-028212 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-028212 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-028212 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-028212 -n default-k8s-diff-port-028212
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-028212 -n default-k8s-diff-port-028212: exit status 2 (257.074501ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-028212 -n default-k8s-diff-port-028212
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-028212 -n default-k8s-diff-port-028212: exit status 2 (264.365922ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-028212 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-028212 -n default-k8s-diff-port-028212
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-028212 -n default-k8s-diff-port-028212
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.61s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-wz5zn" [43213743-ecae-4e1e-855d-0d784743a492] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018955877s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-wz5zn" [43213743-ecae-4e1e-855d-0d784743a492] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010744036s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-337707 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/Pause (2.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-337707 --alsologtostderr -v=1
E1127 11:55:03.198550  129653 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17644-122411/.minikube/profiles/bridge-128714/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-337707 -n old-k8s-version-337707
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-337707 -n old-k8s-version-337707: exit status 2 (257.093207ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-337707 -n old-k8s-version-337707
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-337707 -n old-k8s-version-337707: exit status 2 (251.257808ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-337707 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-337707 -n old-k8s-version-337707
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-337707 -n old-k8s-version-337707
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.42s)

Test skip (26/322)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (4.11s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-128714 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-128714

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-128714

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-128714

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-128714

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-128714

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-128714

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-128714

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-128714

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-128714

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-128714

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-128714

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> k8s: describe netcat deployment:
error: context "cilium-128714" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-128714" does not exist

>>> k8s: netcat logs:
error: context "cilium-128714" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-128714" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-128714" does not exist

>>> k8s: coredns logs:
error: context "cilium-128714" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-128714" does not exist

>>> k8s: api server logs:
error: context "cilium-128714" does not exist

>>> host: /etc/cni:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> host: ip a s:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> host: ip r s:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> host: iptables-save:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> host: iptables table nat:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-128714

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-128714

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-128714" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-128714" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-128714

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-128714

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-128714" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-128714" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-128714" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-128714" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-128714" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> host: kubelet daemon config:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> k8s: kubelet logs:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
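
Note: the kubeconfig dump above is empty (clusters, contexts and users are all null), which is consistent with every 'error: context "cilium-128714" does not exist' in this debugLogs run: the profile was never started, so minikube never wrote a context entry. A minimal Go sketch of the same check using client-go's clientcmd loader (the kubeconfig path and context name below are just this run's values, not something the report prescribes):

package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

// Load a kubeconfig and look up the context the failed kubectl calls
// above were using. Against the empty config shown in the report, the
// lookup misses and we print the same kind of error.
func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		fmt.Fprintln(os.Stderr, "loading kubeconfig:", err)
		os.Exit(1)
	}
	if _, ok := cfg.Contexts["cilium-128714"]; !ok {
		fmt.Printf("context %q does not exist\n", "cilium-128714")
	}
}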

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-128714

>>> host: docker daemon status:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> host: docker daemon config:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> host: docker system info:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> host: cri-docker daemon status:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> host: cri-docker daemon config:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> host: cri-dockerd version:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> host: containerd daemon status:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> host: containerd daemon config:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> host: containerd config dump:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> host: crio daemon status:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> host: crio daemon config:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> host: /etc/crio:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

>>> host: crio config:
* Profile "cilium-128714" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-128714"

----------------------- debugLogs end: cilium-128714 [took: 3.963372881s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-128714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-128714
--- SKIP: TestNetworkPlugins/group/cilium (4.11s)
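
Note on the dump above: every host-side collector fails with a "Profile \"cilium-128714\" not found" message and every kubectl collector with a missing-context error, because the cilium subtest was skipped before any cluster was started while the debug-log helper still ran against the profile name. A hypothetical Go sketch of that shape (debugLogs and the skip reason are stand-ins, not the actual helpers_test.go code):

package integration

import "testing"

// A skipped subtest whose deferred debug-log collector still runs:
// t.Skip exits the test via runtime.Goexit, and deferred calls run on
// the way out, so debugLogs fires against a profile that was never
// created, yielding the errors shown above.
func TestCiliumSketch(t *testing.T) {
	profile := "cilium-128714" // profile name taken from this run
	defer debugLogs(t, profile)
	t.Skip("cilium not exercised by this job") // skip reason is an assumption
}

// debugLogs stands in for the report's collector, which shells out to
// minikube and kubectl for each ">>> host:" / ">>> k8s:" section.
func debugLogs(t *testing.T, profile string) {
	t.Logf("debugLogs start: %s", profile)
}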

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-665384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-665384
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)
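
The skip above is driver-gated: start_stop_delete_test.go:103 bails out unless the job runs the virtualbox driver, and the helper then deletes the placeholder profile. A minimal sketch of such a guard (hypothetical; how the real suite detects the driver is not shown in this report, so an environment variable stands in for it):

package integration

import (
	"os"
	"testing"
)

// Driver-gated skip like the one logged above. The DRIVER variable is
// an assumption used for illustration, not minikube's actual detection.
func TestDisableDriverMountsSketch(t *testing.T) {
	if os.Getenv("DRIVER") != "virtualbox" {
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
	// the real test would start minikube with driver mounts disabled here
}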