Test Report: KVM_Linux 17485

8dc642b39e51c59087e6696ac1afe8c1c527ee77:2023-10-24:31589

Failed tests (7/321)

TestStoppedBinaryUpgrade/Upgrade (174.19s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.1082144715.exe start -p stopped-upgrade-130425 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.1082144715.exe start -p stopped-upgrade-130425 --memory=2200 --vm-driver=kvm2 : (1m47.081763071s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.1082144715.exe -p stopped-upgrade-130425 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.1082144715.exe -p stopped-upgrade-130425 stop: (13.082615662s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-130425 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-130425 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : exit status 90 (54.004172966s)
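The failing step is the third command above: restarting the cluster that the old v1.6.2 binary created, this time with the freshly built binary, exits with status 90 after roughly 54 seconds. The flow can be reproduced by hand with the same three commands taken verbatim from the log (only the temp-file suffix on the old binary and the profile number vary per run):

	/tmp/minikube-v1.6.2.1082144715.exe start -p stopped-upgrade-130425 --memory=2200 --vm-driver=kvm2
	/tmp/minikube-v1.6.2.1082144715.exe -p stopped-upgrade-130425 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-130425 --memory=2200 --alsologtostderr -v=1 --driver=kvm2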

-- stdout --
	* [stopped-upgrade-130425] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-9104/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9104/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-130425 in cluster stopped-upgrade-130425
	* Restarting existing kvm2 VM for "stopped-upgrade-130425" ...
	
	

-- /stdout --
** stderr ** 
	I1024 19:38:20.914949   39712 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:38:20.915047   39712 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:38:20.915051   39712 out.go:309] Setting ErrFile to fd 2...
	I1024 19:38:20.915056   39712 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:38:20.915253   39712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9104/.minikube/bin
	I1024 19:38:20.916240   39712 out.go:303] Setting JSON to false
	I1024 19:38:20.917124   39712 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":4599,"bootTime":1698171702,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:38:20.917186   39712 start.go:138] virtualization: kvm guest
	I1024 19:38:20.919424   39712 out.go:177] * [stopped-upgrade-130425] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:38:20.921125   39712 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:38:20.921187   39712 notify.go:220] Checking for updates...
	I1024 19:38:20.924104   39712 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:38:20.925566   39712 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9104/kubeconfig
	I1024 19:38:20.926981   39712 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9104/.minikube
	I1024 19:38:20.928351   39712 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:38:20.929881   39712 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:38:20.931720   39712 config.go:182] Loaded profile config "stopped-upgrade-130425": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1024 19:38:20.931737   39712 start_flags.go:689] config upgrade: Driver=kvm2
	I1024 19:38:20.931745   39712 start_flags.go:701] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1024 19:38:20.931825   39712 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/stopped-upgrade-130425/config.json ...
	I1024 19:38:20.932416   39712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:38:20.932471   39712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:38:20.946656   39712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43843
	I1024 19:38:20.947196   39712 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:38:20.947902   39712 main.go:141] libmachine: Using API Version  1
	I1024 19:38:20.947935   39712 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:38:20.948293   39712 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:38:20.948483   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .DriverName
	I1024 19:38:20.950599   39712 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1024 19:38:20.952191   39712 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:38:20.952489   39712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:38:20.952528   39712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:38:20.969086   39712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35717
	I1024 19:38:20.969511   39712 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:38:20.970006   39712 main.go:141] libmachine: Using API Version  1
	I1024 19:38:20.970044   39712 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:38:20.970360   39712 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:38:20.970533   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .DriverName
	I1024 19:38:21.006721   39712 out.go:177] * Using the kvm2 driver based on existing profile
	I1024 19:38:21.008091   39712 start.go:298] selected driver: kvm2
	I1024 19:38:21.008105   39712 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-130425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.34 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1024 19:38:21.008240   39712 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:38:21.009137   39712 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:38:21.009214   39712 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9104/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 19:38:21.023769   39712 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1024 19:38:21.024119   39712 cni.go:84] Creating CNI manager for ""
	I1024 19:38:21.024144   39712 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1024 19:38:21.024162   39712 start_flags.go:323] config:
	{Name:stopped-upgrade-130425 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.34 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1024 19:38:21.024362   39712 iso.go:125] acquiring lock: {Name:mkf528b771f12bbaddd502db30db0ccdeec4a711 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:38:21.026143   39712 out.go:177] * Starting control plane node stopped-upgrade-130425 in cluster stopped-upgrade-130425
	I1024 19:38:21.027454   39712 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	W1024 19:38:21.058373   39712 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
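(Note on the warning above: no preload tarball is published for Kubernetes v1.17.0, hence the 404; minikube then falls back to caching each required image individually, which is what the cache.go lines that follow record.)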
	I1024 19:38:21.058540   39712 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/stopped-upgrade-130425/config.json ...
	I1024 19:38:21.058644   39712 cache.go:107] acquiring lock: {Name:mk02c52a760c6c2d81204f850bf40a26731a7b81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:38:21.058648   39712 cache.go:107] acquiring lock: {Name:mkc050c7bb9b0b25bf365c00b4ab41c8e00dc2b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:38:21.058685   39712 cache.go:107] acquiring lock: {Name:mka53407f32db6c6a0c44ac1d4ca4bcab23ed55f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:38:21.058702   39712 cache.go:107] acquiring lock: {Name:mk576b46b466203f5d538bd664b138567515f762 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:38:21.058666   39712 cache.go:107] acquiring lock: {Name:mk5e1dfbf98f27faaba4497d59e32168bbc467ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:38:21.058740   39712 cache.go:115] /home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1024 19:38:21.058740   39712 cache.go:115] /home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1024 19:38:21.058752   39712 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 130.284µs
	I1024 19:38:21.058750   39712 cache.go:107] acquiring lock: {Name:mkc39e793ff97593f91bc7f6b94f7f94ea529865 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:38:21.058752   39712 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 68.005µs
	I1024 19:38:21.058768   39712 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1024 19:38:21.058648   39712 cache.go:107] acquiring lock: {Name:mk10efdd9164308b235b87ab1a93cf4344ac6cb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:38:21.058780   39712 cache.go:107] acquiring lock: {Name:mkdd2e41048b8dd5ae95a7d139604ea3e967a3d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:38:21.058806   39712 cache.go:115] /home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1024 19:38:21.058818   39712 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 168.401µs
	I1024 19:38:21.058830   39712 cache.go:115] /home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1024 19:38:21.058836   39712 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1024 19:38:21.058836   39712 cache.go:115] /home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1024 19:38:21.058842   39712 start.go:365] acquiring machines lock for stopped-upgrade-130425: {Name:mkcbabc1952bf564872040e51bac552940a65164 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 19:38:21.058856   39712 cache.go:115] /home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1024 19:38:21.058865   39712 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 98.859µs
	I1024 19:38:21.058887   39712 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1024 19:38:21.058842   39712 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 227.154µs
	I1024 19:38:21.058898   39712 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1024 19:38:21.058797   39712 cache.go:115] /home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1024 19:38:21.058913   39712 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 213.301µs
	I1024 19:38:21.058925   39712 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1024 19:38:21.058797   39712 cache.go:115] /home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1024 19:38:21.058939   39712 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 320.413µs
	I1024 19:38:21.058950   39712 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1024 19:38:21.058772   39712 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1024 19:38:21.058848   39712 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 100.791µs
	I1024 19:38:21.058960   39712 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1024 19:38:21.058967   39712 cache.go:87] Successfully saved all images to host disk.
	I1024 19:38:39.748511   39712 start.go:369] acquired machines lock for "stopped-upgrade-130425" in 18.689613509s
	I1024 19:38:39.748573   39712 start.go:96] Skipping create...Using existing machine configuration
	I1024 19:38:39.748584   39712 fix.go:54] fixHost starting: minikube
	I1024 19:38:39.749082   39712 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:38:39.749124   39712 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:38:39.769149   39712 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36639
	I1024 19:38:39.769504   39712 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:38:39.769945   39712 main.go:141] libmachine: Using API Version  1
	I1024 19:38:39.769972   39712 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:38:39.770307   39712 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:38:39.770490   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .DriverName
	I1024 19:38:39.770624   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetState
	I1024 19:38:39.772325   39712 fix.go:102] recreateIfNeeded on stopped-upgrade-130425: state=Stopped err=<nil>
	I1024 19:38:39.772371   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .DriverName
	W1024 19:38:39.772536   39712 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 19:38:39.774358   39712 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-130425" ...
	I1024 19:38:39.775901   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .Start
	I1024 19:38:39.776084   39712 main.go:141] libmachine: (stopped-upgrade-130425) Ensuring networks are active...
	I1024 19:38:39.776825   39712 main.go:141] libmachine: (stopped-upgrade-130425) Ensuring network default is active
	I1024 19:38:39.777287   39712 main.go:141] libmachine: (stopped-upgrade-130425) Ensuring network minikube-net is active
	I1024 19:38:39.778193   39712 main.go:141] libmachine: (stopped-upgrade-130425) Getting domain xml...
	I1024 19:38:39.778590   39712 main.go:141] libmachine: (stopped-upgrade-130425) Creating domain...
	I1024 19:38:41.651646   39712 main.go:141] libmachine: (stopped-upgrade-130425) Waiting to get IP...
	I1024 19:38:41.652680   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:38:41.653223   39712 main.go:141] libmachine: (stopped-upgrade-130425) Found IP for machine: 192.168.50.34
	I1024 19:38:41.653243   39712 main.go:141] libmachine: (stopped-upgrade-130425) Reserving static IP address...
	I1024 19:38:41.653305   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has current primary IP address 192.168.50.34 and MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:38:41.653873   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | found host DHCP lease matching {name: "stopped-upgrade-130425", mac: "52:54:00:22:57:7d", ip: "192.168.50.34"} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:22:57:7d Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:stopped-upgrade-130425 Clientid:01:52:54:00:22:57:7d}
	I1024 19:38:41.653903   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-130425", mac: "52:54:00:22:57:7d", ip: "192.168.50.34"}
	I1024 19:38:41.653917   39712 main.go:141] libmachine: (stopped-upgrade-130425) Reserved static IP address: 192.168.50.34
	I1024 19:38:41.653932   39712 main.go:141] libmachine: (stopped-upgrade-130425) Waiting for SSH to be available...
	I1024 19:38:41.653944   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | Getting to WaitForSSH function...
	I1024 19:38:41.656759   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:38:41.657104   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:57:7d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:22:57:7d Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:stopped-upgrade-130425 Clientid:01:52:54:00:22:57:7d}
	I1024 19:38:41.657138   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined IP address 192.168.50.34 and MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:38:41.657371   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | Using SSH client type: external
	I1024 19:38:41.657398   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9104/.minikube/machines/stopped-upgrade-130425/id_rsa (-rw-------)
	I1024 19:38:41.657430   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.34 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9104/.minikube/machines/stopped-upgrade-130425/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 19:38:41.657443   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | About to run SSH command:
	I1024 19:38:41.657455   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | exit 0
	I1024 19:38:58.806955   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | SSH cmd err, output: exit status 255: 
	I1024 19:38:58.806991   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1024 19:38:58.807004   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | command : exit 0
	I1024 19:38:58.807019   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | err     : exit status 255
	I1024 19:38:58.807037   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | output  : 
	I1024 19:39:01.808794   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | Getting to WaitForSSH function...
	I1024 19:39:01.811536   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:01.811961   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:57:7d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:22:57:7d Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:stopped-upgrade-130425 Clientid:01:52:54:00:22:57:7d}
	I1024 19:39:01.811997   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined IP address 192.168.50.34 and MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:01.812153   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | Using SSH client type: external
	I1024 19:39:01.812180   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9104/.minikube/machines/stopped-upgrade-130425/id_rsa (-rw-------)
	I1024 19:39:01.812215   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.34 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9104/.minikube/machines/stopped-upgrade-130425/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 19:39:01.812237   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | About to run SSH command:
	I1024 19:39:01.812256   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | exit 0
	I1024 19:39:07.955056   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | SSH cmd err, output: exit status 255: 
	I1024 19:39:07.955092   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1024 19:39:07.955107   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | command : exit 0
	I1024 19:39:07.955121   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | err     : exit status 255
	I1024 19:39:07.955139   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | output  : 
	I1024 19:39:10.955567   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | Getting to WaitForSSH function...
	I1024 19:39:10.958367   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:10.958845   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:57:7d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:22:57:7d Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:stopped-upgrade-130425 Clientid:01:52:54:00:22:57:7d}
	I1024 19:39:10.958881   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined IP address 192.168.50.34 and MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:10.959011   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | Using SSH client type: external
	I1024 19:39:10.959043   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9104/.minikube/machines/stopped-upgrade-130425/id_rsa (-rw-------)
	I1024 19:39:10.959075   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.34 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9104/.minikube/machines/stopped-upgrade-130425/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 19:39:10.959092   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | About to run SSH command:
	I1024 19:39:10.959137   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | exit 0
	I1024 19:39:11.085129   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | SSH cmd err, output: <nil>: 
	I1024 19:39:11.085564   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetConfigRaw
	I1024 19:39:11.086176   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetIP
	I1024 19:39:11.088784   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:11.089193   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:57:7d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:22:57:7d Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:stopped-upgrade-130425 Clientid:01:52:54:00:22:57:7d}
	I1024 19:39:11.089215   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined IP address 192.168.50.34 and MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:11.089420   39712 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/stopped-upgrade-130425/config.json ...
	I1024 19:39:11.089590   39712 machine.go:88] provisioning docker machine ...
	I1024 19:39:11.089608   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .DriverName
	I1024 19:39:11.089807   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetMachineName
	I1024 19:39:11.089967   39712 buildroot.go:166] provisioning hostname "stopped-upgrade-130425"
	I1024 19:39:11.089984   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetMachineName
	I1024 19:39:11.090112   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHHostname
	I1024 19:39:11.092536   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:11.092876   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:57:7d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:22:57:7d Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:stopped-upgrade-130425 Clientid:01:52:54:00:22:57:7d}
	I1024 19:39:11.092908   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined IP address 192.168.50.34 and MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:11.093060   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHPort
	I1024 19:39:11.093205   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHKeyPath
	I1024 19:39:11.093329   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHKeyPath
	I1024 19:39:11.093477   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHUsername
	I1024 19:39:11.093652   39712 main.go:141] libmachine: Using SSH client type: native
	I1024 19:39:11.094017   39712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I1024 19:39:11.094057   39712 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-130425 && echo "stopped-upgrade-130425" | sudo tee /etc/hostname
	I1024 19:39:11.208118   39712 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-130425
	
	I1024 19:39:11.208145   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHHostname
	I1024 19:39:11.210887   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:11.211205   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:57:7d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:22:57:7d Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:stopped-upgrade-130425 Clientid:01:52:54:00:22:57:7d}
	I1024 19:39:11.211230   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined IP address 192.168.50.34 and MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:11.211429   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHPort
	I1024 19:39:11.211626   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHKeyPath
	I1024 19:39:11.211814   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHKeyPath
	I1024 19:39:11.211980   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHUsername
	I1024 19:39:11.212154   39712 main.go:141] libmachine: Using SSH client type: native
	I1024 19:39:11.212668   39712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I1024 19:39:11.212697   39712 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-130425' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-130425/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-130425' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:39:11.326513   39712 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:39:11.326544   39712 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9104/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9104/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9104/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9104/.minikube}
	I1024 19:39:11.326578   39712 buildroot.go:174] setting up certificates
	I1024 19:39:11.326596   39712 provision.go:83] configureAuth start
	I1024 19:39:11.326619   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetMachineName
	I1024 19:39:11.326929   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetIP
	I1024 19:39:11.329815   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:11.330086   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:57:7d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:22:57:7d Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:stopped-upgrade-130425 Clientid:01:52:54:00:22:57:7d}
	I1024 19:39:11.330123   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined IP address 192.168.50.34 and MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:11.330294   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHHostname
	I1024 19:39:11.332964   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:11.333354   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:57:7d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:22:57:7d Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:stopped-upgrade-130425 Clientid:01:52:54:00:22:57:7d}
	I1024 19:39:11.333395   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined IP address 192.168.50.34 and MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:11.333593   39712 provision.go:138] copyHostCerts
	I1024 19:39:11.333658   39712 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9104/.minikube/cert.pem, removing ...
	I1024 19:39:11.333692   39712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9104/.minikube/cert.pem
	I1024 19:39:11.333781   39712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9104/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9104/.minikube/cert.pem (1123 bytes)
	I1024 19:39:11.333906   39712 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9104/.minikube/key.pem, removing ...
	I1024 19:39:11.333918   39712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9104/.minikube/key.pem
	I1024 19:39:11.333961   39712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9104/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9104/.minikube/key.pem (1675 bytes)
	I1024 19:39:11.334062   39712 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9104/.minikube/ca.pem, removing ...
	I1024 19:39:11.334075   39712 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9104/.minikube/ca.pem
	I1024 19:39:11.334116   39712 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9104/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9104/.minikube/ca.pem (1082 bytes)
	I1024 19:39:11.334186   39712 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9104/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9104/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9104/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-130425 san=[192.168.50.34 192.168.50.34 localhost 127.0.0.1 minikube stopped-upgrade-130425]
	I1024 19:39:11.558523   39712 provision.go:172] copyRemoteCerts
	I1024 19:39:11.558591   39712 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:39:11.558613   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHHostname
	I1024 19:39:11.561405   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:11.561785   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:57:7d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:22:57:7d Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:stopped-upgrade-130425 Clientid:01:52:54:00:22:57:7d}
	I1024 19:39:11.561816   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined IP address 192.168.50.34 and MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:11.561976   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHPort
	I1024 19:39:11.562167   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHKeyPath
	I1024 19:39:11.562336   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHUsername
	I1024 19:39:11.562465   39712 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/stopped-upgrade-130425/id_rsa Username:docker}
	I1024 19:39:11.648942   39712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9104/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1024 19:39:11.663582   39712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9104/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1024 19:39:11.677588   39712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9104/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1024 19:39:11.691885   39712 provision.go:86] duration metric: configureAuth took 365.27789ms
	I1024 19:39:11.691920   39712 buildroot.go:189] setting minikube options for container-runtime
	I1024 19:39:11.692121   39712 config.go:182] Loaded profile config "stopped-upgrade-130425": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1024 19:39:11.692152   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .DriverName
	I1024 19:39:11.692454   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHHostname
	I1024 19:39:11.695157   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:11.695565   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:57:7d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:22:57:7d Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:stopped-upgrade-130425 Clientid:01:52:54:00:22:57:7d}
	I1024 19:39:11.695593   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined IP address 192.168.50.34 and MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:11.695779   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHPort
	I1024 19:39:11.695953   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHKeyPath
	I1024 19:39:11.696110   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHKeyPath
	I1024 19:39:11.696240   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHUsername
	I1024 19:39:11.696417   39712 main.go:141] libmachine: Using SSH client type: native
	I1024 19:39:11.696760   39712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I1024 19:39:11.696775   39712 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1024 19:39:11.807327   39712 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1024 19:39:11.807355   39712 buildroot.go:70] root file system type: tmpfs
	I1024 19:39:11.807519   39712 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1024 19:39:11.807553   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHHostname
	I1024 19:39:11.810257   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:11.810640   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:57:7d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:22:57:7d Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:stopped-upgrade-130425 Clientid:01:52:54:00:22:57:7d}
	I1024 19:39:11.810668   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined IP address 192.168.50.34 and MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:11.810881   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHPort
	I1024 19:39:11.811068   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHKeyPath
	I1024 19:39:11.811200   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHKeyPath
	I1024 19:39:11.811322   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHUsername
	I1024 19:39:11.811472   39712 main.go:141] libmachine: Using SSH client type: native
	I1024 19:39:11.811792   39712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I1024 19:39:11.811849   39712 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1024 19:39:11.931766   39712 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1024 19:39:11.931810   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHHostname
	I1024 19:39:11.934886   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:11.935245   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:57:7d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:22:57:7d Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:stopped-upgrade-130425 Clientid:01:52:54:00:22:57:7d}
	I1024 19:39:11.935278   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined IP address 192.168.50.34 and MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:11.935629   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHPort
	I1024 19:39:11.935856   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHKeyPath
	I1024 19:39:11.936033   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHKeyPath
	I1024 19:39:11.936164   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHUsername
	I1024 19:39:11.936365   39712 main.go:141] libmachine: Using SSH client type: native
	I1024 19:39:11.936782   39712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I1024 19:39:11.936803   39712 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1024 19:39:12.771325   39712 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1024 19:39:12.771357   39712 machine.go:91] provisioned docker machine in 1.681753139s
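(Note on the docker.service exchange above: the rendered unit is written to docker.service.new, diffed against the installed unit, and only moved into place, followed by daemon-reload/enable/restart, when the two differ; the "can't stat" diff output simply means no unit file existed yet on the restored VM, so the new one was installed and enabled, as the symlink message shows.)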
	I1024 19:39:12.771371   39712 start.go:300] post-start starting for "stopped-upgrade-130425" (driver="kvm2")
	I1024 19:39:12.771388   39712 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:39:12.771408   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .DriverName
	I1024 19:39:12.771749   39712 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:39:12.771791   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHHostname
	I1024 19:39:12.774796   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:12.775225   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:57:7d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:22:57:7d Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:stopped-upgrade-130425 Clientid:01:52:54:00:22:57:7d}
	I1024 19:39:12.775265   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined IP address 192.168.50.34 and MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:12.775425   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHPort
	I1024 19:39:12.775618   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHKeyPath
	I1024 19:39:12.775815   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHUsername
	I1024 19:39:12.775980   39712 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/stopped-upgrade-130425/id_rsa Username:docker}
	I1024 19:39:12.861143   39712 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:39:12.865669   39712 info.go:137] Remote host: Buildroot 2019.02.7
	I1024 19:39:12.865692   39712 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9104/.minikube/addons for local assets ...
	I1024 19:39:12.865786   39712 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9104/.minikube/files for local assets ...
	I1024 19:39:12.865887   39712 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9104/.minikube/files/etc/ssl/certs/164432.pem -> 164432.pem in /etc/ssl/certs
	I1024 19:39:12.866006   39712 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 19:39:12.872471   39712 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9104/.minikube/files/etc/ssl/certs/164432.pem --> /etc/ssl/certs/164432.pem (1708 bytes)
	I1024 19:39:12.886759   39712 start.go:303] post-start completed in 115.371273ms
	I1024 19:39:12.886783   39712 fix.go:56] fixHost completed within 33.138198674s
	I1024 19:39:12.886808   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHHostname
	I1024 19:39:12.889717   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:12.890067   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:57:7d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:22:57:7d Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:stopped-upgrade-130425 Clientid:01:52:54:00:22:57:7d}
	I1024 19:39:12.890105   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined IP address 192.168.50.34 and MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:12.890301   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHPort
	I1024 19:39:12.890543   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHKeyPath
	I1024 19:39:12.890761   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHKeyPath
	I1024 19:39:12.890908   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHUsername
	I1024 19:39:12.891087   39712 main.go:141] libmachine: Using SSH client type: native
	I1024 19:39:12.891508   39712 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.34 22 <nil> <nil>}
	I1024 19:39:12.891522   39712 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1024 19:39:13.010622   39712 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698176352.954432987
	
	I1024 19:39:13.010644   39712 fix.go:206] guest clock: 1698176352.954432987
	I1024 19:39:13.010652   39712 fix.go:219] Guest: 2023-10-24 19:39:12.954432987 +0000 UTC Remote: 2023-10-24 19:39:12.886787604 +0000 UTC m=+52.029187628 (delta=67.645383ms)
	I1024 19:39:13.010668   39712 fix.go:190] guest clock delta is within tolerance: 67.645383ms
	I1024 19:39:13.010673   39712 start.go:83] releasing machines lock for "stopped-upgrade-130425", held for 33.262136141s
	I1024 19:39:13.010701   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .DriverName
	I1024 19:39:13.010958   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetIP
	I1024 19:39:13.014016   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:13.014452   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:57:7d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:22:57:7d Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:stopped-upgrade-130425 Clientid:01:52:54:00:22:57:7d}
	I1024 19:39:13.014495   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined IP address 192.168.50.34 and MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:13.014676   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .DriverName
	I1024 19:39:13.015163   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .DriverName
	I1024 19:39:13.015325   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .DriverName
	I1024 19:39:13.015430   39712 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:39:13.015492   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHHostname
	I1024 19:39:13.015544   39712 ssh_runner.go:195] Run: cat /version.json
	I1024 19:39:13.015582   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHHostname
	I1024 19:39:13.018376   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:13.018587   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:13.018771   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:57:7d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:22:57:7d Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:stopped-upgrade-130425 Clientid:01:52:54:00:22:57:7d}
	I1024 19:39:13.018800   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined IP address 192.168.50.34 and MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:13.018899   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHPort
	I1024 19:39:13.019036   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:57:7d", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-24 20:36:51 +0000 UTC Type:0 Mac:52:54:00:22:57:7d Iaid: IPaddr:192.168.50.34 Prefix:24 Hostname:stopped-upgrade-130425 Clientid:01:52:54:00:22:57:7d}
	I1024 19:39:13.019055   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHKeyPath
	I1024 19:39:13.019068   39712 main.go:141] libmachine: (stopped-upgrade-130425) DBG | domain stopped-upgrade-130425 has defined IP address 192.168.50.34 and MAC address 52:54:00:22:57:7d in network minikube-net
	I1024 19:39:13.019194   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHUsername
	I1024 19:39:13.019290   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHPort
	I1024 19:39:13.019354   39712 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/stopped-upgrade-130425/id_rsa Username:docker}
	I1024 19:39:13.019447   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHKeyPath
	I1024 19:39:13.019587   39712 main.go:141] libmachine: (stopped-upgrade-130425) Calling .GetSSHUsername
	I1024 19:39:13.019754   39712 sshutil.go:53] new ssh client: &{IP:192.168.50.34 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/stopped-upgrade-130425/id_rsa Username:docker}
	W1024 19:39:13.107337   39712 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1024 19:39:13.107436   39712 ssh_runner.go:195] Run: systemctl --version
	I1024 19:39:13.130603   39712 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 19:39:13.136425   39712 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 19:39:13.136500   39712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1024 19:39:13.143065   39712 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1024 19:39:13.148338   39712 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I1024 19:39:13.148369   39712 start.go:472] detecting cgroup driver to use...
	I1024 19:39:13.148480   39712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:39:13.162674   39712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I1024 19:39:13.170892   39712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1024 19:39:13.177480   39712 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1024 19:39:13.177537   39712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1024 19:39:13.185190   39712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1024 19:39:13.191801   39712 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1024 19:39:13.198712   39712 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1024 19:39:13.205290   39712 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 19:39:13.212904   39712 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1024 19:39:13.219572   39712 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 19:39:13.226650   39712 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 19:39:13.233684   39712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:39:13.315771   39712 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1024 19:39:13.335683   39712 start.go:472] detecting cgroup driver to use...
	I1024 19:39:13.335764   39712 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1024 19:39:13.350073   39712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:39:13.363427   39712 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:39:13.398348   39712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:39:13.411592   39712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1024 19:39:13.424685   39712 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:39:13.437957   39712 ssh_runner.go:195] Run: which cri-dockerd
	I1024 19:39:13.442572   39712 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1024 19:39:13.450574   39712 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1024 19:39:13.464190   39712 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1024 19:39:13.555186   39712 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1024 19:39:13.645704   39712 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1024 19:39:13.645859   39712 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1024 19:39:13.656601   39712 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:39:13.753111   39712 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1024 19:39:14.844903   39712 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.091752726s)
	I1024 19:39:14.847197   39712 out.go:177] 
	W1024 19:39:14.848822   39712 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	W1024 19:39:14.848846   39712 out.go:239] * 
	W1024 19:39:14.849855   39712 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1024 19:39:14.850880   39712 out.go:177] 

** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-130425 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (174.19s)
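Failure analysis: the upgrade path restores a VM built by minikube v1.6.2 (the guest reports Buildroot 2019.02.7 and has no /version.json), after which the v1.31.2 binary rewrites /etc/docker/daemon.json for the cgroupfs driver and runs sudo systemctl restart docker, which exits non-zero and aborts the start with RUNTIME_ENABLE (exit status 90). The log does not capture why docker.service failed; one plausible but unconfirmed cause is that the Docker version shipped in the old v1.6.2 ISO rejects options in the newly written daemon.json. A minimal diagnostic sketch, assuming the stopped-upgrade-130425 profile still exists on the agent and its VM is reachable over SSH:

	# Inspect docker.service inside the upgraded VM (profile name taken from the log above).
	minikube ssh -p stopped-upgrade-130425 -- sudo systemctl status docker.service --no-pager
	minikube ssh -p stopped-upgrade-130425 -- sudo journalctl -u docker.service --no-pager -n 50
	# Compare what the new binary wrote against what the old guest's Docker accepts.
	minikube ssh -p stopped-upgrade-130425 -- sudo cat /etc/docker/daemon.json
	minikube ssh -p stopped-upgrade-130425 -- docker version

If daemon.json turns out to be the trigger, minikube logs --file=logs.txt -p stopped-upgrade-130425 captures the same journal output for attaching to an issue, as the box above suggests.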

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (24.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-744739 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3
E1024 19:53:29.610077   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/enable-default-cni-014827/client.crt: no such file or directory
E1024 19:53:29.615370   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/enable-default-cni-014827/client.crt: no such file or directory
E1024 19:53:29.625659   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/enable-default-cni-014827/client.crt: no such file or directory
E1024 19:53:29.645931   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/enable-default-cni-014827/client.crt: no such file or directory
E1024 19:53:29.686206   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/enable-default-cni-014827/client.crt: no such file or directory
E1024 19:53:29.766601   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/enable-default-cni-014827/client.crt: no such file or directory
E1024 19:53:29.927837   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/enable-default-cni-014827/client.crt: no such file or directory
E1024 19:53:30.248221   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/enable-default-cni-014827/client.crt: no such file or directory
E1024 19:53:30.889066   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/enable-default-cni-014827/client.crt: no such file or directory
E1024 19:53:32.170053   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/enable-default-cni-014827/client.crt: no such file or directory
E1024 19:53:34.730518   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/enable-default-cni-014827/client.crt: no such file or directory
E1024 19:53:36.697213   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/auto-014827/client.crt: no such file or directory
E1024 19:53:39.851090   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/enable-default-cni-014827/client.crt: no such file or directory
E1024 19:53:40.063039   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/false-014827/client.crt: no such file or directory
E1024 19:53:44.405086   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/calico-014827/client.crt: no such file or directory
E1024 19:53:44.560847   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
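The E1024 cert_rotation.go:168 lines above are background noise rather than part of this failure: the long-lived test binary (pid 16443 throughout this report) apparently still holds client-go certificate-rotation watchers for profiles that earlier tests created and deleted (enable-default-cni-014827, auto-014827, false-014827, calico-014827, addons-903896), so reloading their client.crt keys fails with "no such file or directory". A sketch to check which referenced client certificates are gone, assuming the shared kubeconfig path shown in the log:

	# Flag client certificates referenced by the shared kubeconfig that no longer exist on disk.
	awk '/client-certificate:/ {print $2}' \
	    /home/jenkins/minikube-integration/17485-9104/kubeconfig |
	while read -r crt; do
	    [ -e "$crt" ] || echo "stale: $crt"
	done

If minikube delete already pruned those contexts from the kubeconfig, the listing comes back empty and the errors are simply watchers outliving their profiles; either way they do not explain the exit status 90 below.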
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-diff-port-744739 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3: exit status 90 (24.619590356s)

-- stdout --
	* [default-k8s-diff-port-744739] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-9104/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9104/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-744739 in cluster default-k8s-diff-port-744739
	* Restarting existing kvm2 VM for "default-k8s-diff-port-744739" ...
	
	

-- /stdout --
** stderr ** 
	I1024 19:53:20.304660   62204 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:53:20.304831   62204 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:53:20.304845   62204 out.go:309] Setting ErrFile to fd 2...
	I1024 19:53:20.304853   62204 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:53:20.305058   62204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9104/.minikube/bin
	I1024 19:53:20.305642   62204 out.go:303] Setting JSON to false
	I1024 19:53:20.306634   62204 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":5498,"bootTime":1698171702,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:53:20.306693   62204 start.go:138] virtualization: kvm guest
	I1024 19:53:20.309184   62204 out.go:177] * [default-k8s-diff-port-744739] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:53:20.311179   62204 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:53:20.311191   62204 notify.go:220] Checking for updates...
	I1024 19:53:20.312821   62204 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:53:20.314427   62204 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9104/kubeconfig
	I1024 19:53:20.315993   62204 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9104/.minikube
	I1024 19:53:20.317496   62204 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:53:20.318988   62204 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:53:20.321036   62204 config.go:182] Loaded profile config "default-k8s-diff-port-744739": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1024 19:53:20.321654   62204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:53:20.321723   62204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:53:20.336832   62204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35473
	I1024 19:53:20.337721   62204 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:53:20.339756   62204 main.go:141] libmachine: Using API Version  1
	I1024 19:53:20.339780   62204 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:53:20.340256   62204 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:53:20.340456   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .DriverName
	I1024 19:53:20.340701   62204 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:53:20.341011   62204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:53:20.341077   62204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:53:20.355342   62204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36841
	I1024 19:53:20.355827   62204 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:53:20.356311   62204 main.go:141] libmachine: Using API Version  1
	I1024 19:53:20.356335   62204 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:53:20.356679   62204 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:53:20.356885   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .DriverName
	I1024 19:53:20.394151   62204 out.go:177] * Using the kvm2 driver based on existing profile
	I1024 19:53:20.395554   62204 start.go:298] selected driver: kvm2
	I1024 19:53:20.395569   62204 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-744739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-744739 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.252 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:53:20.395683   62204 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:53:20.396349   62204 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:53:20.396438   62204 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9104/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 19:53:20.411251   62204 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1024 19:53:20.411769   62204 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1024 19:53:20.411824   62204 cni.go:84] Creating CNI manager for ""
	I1024 19:53:20.411839   62204 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1024 19:53:20.411848   62204 start_flags.go:323] config:
	{Name:default-k8s-diff-port-744739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:default-k8s-diff-port-744739 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.61.252 Port:8444 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:53:20.412109   62204 iso.go:125] acquiring lock: {Name:mkf528b771f12bbaddd502db30db0ccdeec4a711 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:53:20.415105   62204 out.go:177] * Starting control plane node default-k8s-diff-port-744739 in cluster default-k8s-diff-port-744739
	I1024 19:53:20.416661   62204 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1024 19:53:20.416696   62204 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17485-9104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1024 19:53:20.416705   62204 cache.go:57] Caching tarball of preloaded images
	I1024 19:53:20.416802   62204 preload.go:174] Found /home/jenkins/minikube-integration/17485-9104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1024 19:53:20.416812   62204 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1024 19:53:20.416931   62204 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/default-k8s-diff-port-744739/config.json ...
	I1024 19:53:20.417157   62204 start.go:365] acquiring machines lock for default-k8s-diff-port-744739: {Name:mkcbabc1952bf564872040e51bac552940a65164 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 19:53:20.417204   62204 start.go:369] acquired machines lock for "default-k8s-diff-port-744739" in 26.448µs
	I1024 19:53:20.417223   62204 start.go:96] Skipping create...Using existing machine configuration
	I1024 19:53:20.417230   62204 fix.go:54] fixHost starting: 
	I1024 19:53:20.417535   62204 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:53:20.417559   62204 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:53:20.434327   62204 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41587
	I1024 19:53:20.434809   62204 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:53:20.435315   62204 main.go:141] libmachine: Using API Version  1
	I1024 19:53:20.435341   62204 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:53:20.435709   62204 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:53:20.435916   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .DriverName
	I1024 19:53:20.436066   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetState
	I1024 19:53:20.437864   62204 fix.go:102] recreateIfNeeded on default-k8s-diff-port-744739: state=Stopped err=<nil>
	I1024 19:53:20.437897   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .DriverName
	W1024 19:53:20.438114   62204 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 19:53:20.440037   62204 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-744739" ...
	I1024 19:53:20.441848   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .Start
	I1024 19:53:20.442090   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Ensuring networks are active...
	I1024 19:53:20.442999   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Ensuring network default is active
	I1024 19:53:20.443338   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Ensuring network mk-default-k8s-diff-port-744739 is active
	I1024 19:53:20.443779   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Getting domain xml...
	I1024 19:53:20.444608   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Creating domain...
	I1024 19:53:21.787954   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Waiting to get IP...
	I1024 19:53:21.789048   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:21.789586   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | unable to find current IP address of domain default-k8s-diff-port-744739 in network mk-default-k8s-diff-port-744739
	I1024 19:53:21.789682   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | I1024 19:53:21.789578   62238 retry.go:31] will retry after 215.922706ms: waiting for machine to come up
	I1024 19:53:22.007110   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:22.007714   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | unable to find current IP address of domain default-k8s-diff-port-744739 in network mk-default-k8s-diff-port-744739
	I1024 19:53:22.007748   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | I1024 19:53:22.007676   62238 retry.go:31] will retry after 316.831892ms: waiting for machine to come up
	I1024 19:53:22.326379   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:22.326902   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | unable to find current IP address of domain default-k8s-diff-port-744739 in network mk-default-k8s-diff-port-744739
	I1024 19:53:22.326932   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | I1024 19:53:22.326861   62238 retry.go:31] will retry after 483.549517ms: waiting for machine to come up
	I1024 19:53:22.812204   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:22.812758   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | unable to find current IP address of domain default-k8s-diff-port-744739 in network mk-default-k8s-diff-port-744739
	I1024 19:53:22.812792   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | I1024 19:53:22.812707   62238 retry.go:31] will retry after 485.707144ms: waiting for machine to come up
	I1024 19:53:23.300253   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:23.300992   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | unable to find current IP address of domain default-k8s-diff-port-744739 in network mk-default-k8s-diff-port-744739
	I1024 19:53:23.301024   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | I1024 19:53:23.300936   62238 retry.go:31] will retry after 543.095748ms: waiting for machine to come up
	I1024 19:53:23.845210   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:23.845758   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | unable to find current IP address of domain default-k8s-diff-port-744739 in network mk-default-k8s-diff-port-744739
	I1024 19:53:23.845796   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | I1024 19:53:23.845714   62238 retry.go:31] will retry after 914.246273ms: waiting for machine to come up
	I1024 19:53:24.761889   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:24.762415   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | unable to find current IP address of domain default-k8s-diff-port-744739 in network mk-default-k8s-diff-port-744739
	I1024 19:53:24.762447   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | I1024 19:53:24.762350   62238 retry.go:31] will retry after 1.13398672s: waiting for machine to come up
	I1024 19:53:25.898153   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:25.898768   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | unable to find current IP address of domain default-k8s-diff-port-744739 in network mk-default-k8s-diff-port-744739
	I1024 19:53:25.898797   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | I1024 19:53:25.898717   62238 retry.go:31] will retry after 1.31096416s: waiting for machine to come up
	I1024 19:53:27.211323   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:27.211993   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | unable to find current IP address of domain default-k8s-diff-port-744739 in network mk-default-k8s-diff-port-744739
	I1024 19:53:27.212024   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | I1024 19:53:27.211924   62238 retry.go:31] will retry after 1.320887166s: waiting for machine to come up
	I1024 19:53:28.534340   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:28.534790   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | unable to find current IP address of domain default-k8s-diff-port-744739 in network mk-default-k8s-diff-port-744739
	I1024 19:53:28.534818   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | I1024 19:53:28.534737   62238 retry.go:31] will retry after 2.054136461s: waiting for machine to come up
	I1024 19:53:30.590832   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:30.591435   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | unable to find current IP address of domain default-k8s-diff-port-744739 in network mk-default-k8s-diff-port-744739
	I1024 19:53:30.591473   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | I1024 19:53:30.591362   62238 retry.go:31] will retry after 2.585688518s: waiting for machine to come up
	I1024 19:53:33.179297   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:33.179961   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | unable to find current IP address of domain default-k8s-diff-port-744739 in network mk-default-k8s-diff-port-744739
	I1024 19:53:33.179994   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | I1024 19:53:33.179895   62238 retry.go:31] will retry after 3.391449149s: waiting for machine to come up
	I1024 19:53:36.573392   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:36.573946   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | unable to find current IP address of domain default-k8s-diff-port-744739 in network mk-default-k8s-diff-port-744739
	I1024 19:53:36.574010   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | I1024 19:53:36.573870   62238 retry.go:31] will retry after 3.092695602s: waiting for machine to come up
	I1024 19:53:39.669341   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:39.669854   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Found IP for machine: 192.168.61.252
	I1024 19:53:39.669891   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has current primary IP address 192.168.61.252 and MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:39.669902   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Reserving static IP address...
	I1024 19:53:39.670325   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-744739", mac: "52:54:00:69:38:1c", ip: "192.168.61.252"} in network mk-default-k8s-diff-port-744739: {Iface:virbr3 ExpiryTime:2023-10-24 20:51:27 +0000 UTC Type:0 Mac:52:54:00:69:38:1c Iaid: IPaddr:192.168.61.252 Prefix:24 Hostname:default-k8s-diff-port-744739 Clientid:01:52:54:00:69:38:1c}
	I1024 19:53:39.670360   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Reserved static IP address: 192.168.61.252
	I1024 19:53:39.670383   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | skip adding static IP to network mk-default-k8s-diff-port-744739 - found existing host DHCP lease matching {name: "default-k8s-diff-port-744739", mac: "52:54:00:69:38:1c", ip: "192.168.61.252"}
	I1024 19:53:39.670400   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Waiting for SSH to be available...
	I1024 19:53:39.670413   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | Getting to WaitForSSH function...
	I1024 19:53:39.672726   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:39.673116   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:38:1c", ip: ""} in network mk-default-k8s-diff-port-744739: {Iface:virbr3 ExpiryTime:2023-10-24 20:51:27 +0000 UTC Type:0 Mac:52:54:00:69:38:1c Iaid: IPaddr:192.168.61.252 Prefix:24 Hostname:default-k8s-diff-port-744739 Clientid:01:52:54:00:69:38:1c}
	I1024 19:53:39.673185   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined IP address 192.168.61.252 and MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:39.673235   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | Using SSH client type: external
	I1024 19:53:39.673281   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9104/.minikube/machines/default-k8s-diff-port-744739/id_rsa (-rw-------)
	I1024 19:53:39.673331   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.252 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9104/.minikube/machines/default-k8s-diff-port-744739/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 19:53:39.673356   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | About to run SSH command:
	I1024 19:53:39.673373   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | exit 0
	I1024 19:53:39.762648   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | SSH cmd err, output: <nil>: 
	I1024 19:53:39.762985   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetConfigRaw
	I1024 19:53:39.763668   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetIP
	I1024 19:53:39.766257   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:39.766654   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:38:1c", ip: ""} in network mk-default-k8s-diff-port-744739: {Iface:virbr3 ExpiryTime:2023-10-24 20:51:27 +0000 UTC Type:0 Mac:52:54:00:69:38:1c Iaid: IPaddr:192.168.61.252 Prefix:24 Hostname:default-k8s-diff-port-744739 Clientid:01:52:54:00:69:38:1c}
	I1024 19:53:39.766700   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined IP address 192.168.61.252 and MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:39.766908   62204 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/default-k8s-diff-port-744739/config.json ...
	I1024 19:53:39.767084   62204 machine.go:88] provisioning docker machine ...
	I1024 19:53:39.767100   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .DriverName
	I1024 19:53:39.767284   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetMachineName
	I1024 19:53:39.767431   62204 buildroot.go:166] provisioning hostname "default-k8s-diff-port-744739"
	I1024 19:53:39.767450   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetMachineName
	I1024 19:53:39.767592   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHHostname
	I1024 19:53:39.770282   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:39.770681   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:38:1c", ip: ""} in network mk-default-k8s-diff-port-744739: {Iface:virbr3 ExpiryTime:2023-10-24 20:51:27 +0000 UTC Type:0 Mac:52:54:00:69:38:1c Iaid: IPaddr:192.168.61.252 Prefix:24 Hostname:default-k8s-diff-port-744739 Clientid:01:52:54:00:69:38:1c}
	I1024 19:53:39.770724   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined IP address 192.168.61.252 and MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:39.770850   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHPort
	I1024 19:53:39.771039   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHKeyPath
	I1024 19:53:39.771169   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHKeyPath
	I1024 19:53:39.771263   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHUsername
	I1024 19:53:39.771427   62204 main.go:141] libmachine: Using SSH client type: native
	I1024 19:53:39.771755   62204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.252 22 <nil> <nil>}
	I1024 19:53:39.771769   62204 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-744739 && echo "default-k8s-diff-port-744739" | sudo tee /etc/hostname
	I1024 19:53:39.900654   62204 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-744739
	
	I1024 19:53:39.900688   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHHostname
	I1024 19:53:39.903567   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:39.904006   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:38:1c", ip: ""} in network mk-default-k8s-diff-port-744739: {Iface:virbr3 ExpiryTime:2023-10-24 20:51:27 +0000 UTC Type:0 Mac:52:54:00:69:38:1c Iaid: IPaddr:192.168.61.252 Prefix:24 Hostname:default-k8s-diff-port-744739 Clientid:01:52:54:00:69:38:1c}
	I1024 19:53:39.904039   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined IP address 192.168.61.252 and MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:39.904200   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHPort
	I1024 19:53:39.904381   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHKeyPath
	I1024 19:53:39.904570   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHKeyPath
	I1024 19:53:39.904709   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHUsername
	I1024 19:53:39.904878   62204 main.go:141] libmachine: Using SSH client type: native
	I1024 19:53:39.905296   62204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.252 22 <nil> <nil>}
	I1024 19:53:39.905330   62204 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-744739' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-744739/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-744739' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:53:40.034223   62204 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:53:40.034260   62204 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9104/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9104/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9104/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9104/.minikube}
	I1024 19:53:40.034285   62204 buildroot.go:174] setting up certificates
	I1024 19:53:40.034299   62204 provision.go:83] configureAuth start
	I1024 19:53:40.034311   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetMachineName
	I1024 19:53:40.034588   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetIP
	I1024 19:53:40.037644   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:40.038045   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:38:1c", ip: ""} in network mk-default-k8s-diff-port-744739: {Iface:virbr3 ExpiryTime:2023-10-24 20:51:27 +0000 UTC Type:0 Mac:52:54:00:69:38:1c Iaid: IPaddr:192.168.61.252 Prefix:24 Hostname:default-k8s-diff-port-744739 Clientid:01:52:54:00:69:38:1c}
	I1024 19:53:40.038077   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined IP address 192.168.61.252 and MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:40.038179   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHHostname
	I1024 19:53:40.040823   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:40.041146   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:38:1c", ip: ""} in network mk-default-k8s-diff-port-744739: {Iface:virbr3 ExpiryTime:2023-10-24 20:51:27 +0000 UTC Type:0 Mac:52:54:00:69:38:1c Iaid: IPaddr:192.168.61.252 Prefix:24 Hostname:default-k8s-diff-port-744739 Clientid:01:52:54:00:69:38:1c}
	I1024 19:53:40.041165   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined IP address 192.168.61.252 and MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:40.041328   62204 provision.go:138] copyHostCerts
	I1024 19:53:40.041385   62204 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9104/.minikube/ca.pem, removing ...
	I1024 19:53:40.041406   62204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9104/.minikube/ca.pem
	I1024 19:53:40.041472   62204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9104/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9104/.minikube/ca.pem (1082 bytes)
	I1024 19:53:40.041569   62204 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9104/.minikube/cert.pem, removing ...
	I1024 19:53:40.041579   62204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9104/.minikube/cert.pem
	I1024 19:53:40.041603   62204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9104/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9104/.minikube/cert.pem (1123 bytes)
	I1024 19:53:40.041665   62204 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9104/.minikube/key.pem, removing ...
	I1024 19:53:40.041674   62204 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9104/.minikube/key.pem
	I1024 19:53:40.041699   62204 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9104/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9104/.minikube/key.pem (1675 bytes)
	I1024 19:53:40.041758   62204 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9104/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9104/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9104/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-744739 san=[192.168.61.252 192.168.61.252 localhost 127.0.0.1 minikube default-k8s-diff-port-744739]
	I1024 19:53:40.143050   62204 provision.go:172] copyRemoteCerts
	I1024 19:53:40.143131   62204 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:53:40.143159   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHHostname
	I1024 19:53:40.146160   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:40.146532   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:38:1c", ip: ""} in network mk-default-k8s-diff-port-744739: {Iface:virbr3 ExpiryTime:2023-10-24 20:51:27 +0000 UTC Type:0 Mac:52:54:00:69:38:1c Iaid: IPaddr:192.168.61.252 Prefix:24 Hostname:default-k8s-diff-port-744739 Clientid:01:52:54:00:69:38:1c}
	I1024 19:53:40.146581   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined IP address 192.168.61.252 and MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:40.146782   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHPort
	I1024 19:53:40.146963   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHKeyPath
	I1024 19:53:40.147125   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHUsername
	I1024 19:53:40.147296   62204 sshutil.go:53] new ssh client: &{IP:192.168.61.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/default-k8s-diff-port-744739/id_rsa Username:docker}
	I1024 19:53:40.232763   62204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9104/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1024 19:53:40.259053   62204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9104/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1024 19:53:40.286594   62204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9104/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 19:53:40.313509   62204 provision.go:86] duration metric: configureAuth took 279.196092ms
	I1024 19:53:40.313536   62204 buildroot.go:189] setting minikube options for container-runtime
	I1024 19:53:40.313771   62204 config.go:182] Loaded profile config "default-k8s-diff-port-744739": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1024 19:53:40.313797   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .DriverName
	I1024 19:53:40.314135   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHHostname
	I1024 19:53:40.317288   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:40.317696   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:38:1c", ip: ""} in network mk-default-k8s-diff-port-744739: {Iface:virbr3 ExpiryTime:2023-10-24 20:51:27 +0000 UTC Type:0 Mac:52:54:00:69:38:1c Iaid: IPaddr:192.168.61.252 Prefix:24 Hostname:default-k8s-diff-port-744739 Clientid:01:52:54:00:69:38:1c}
	I1024 19:53:40.317727   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined IP address 192.168.61.252 and MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:40.317887   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHPort
	I1024 19:53:40.318075   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHKeyPath
	I1024 19:53:40.318253   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHKeyPath
	I1024 19:53:40.318406   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHUsername
	I1024 19:53:40.318574   62204 main.go:141] libmachine: Using SSH client type: native
	I1024 19:53:40.319039   62204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.252 22 <nil> <nil>}
	I1024 19:53:40.319062   62204 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1024 19:53:40.431835   62204 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1024 19:53:40.431863   62204 buildroot.go:70] root file system type: tmpfs
	I1024 19:53:40.431999   62204 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1024 19:53:40.432025   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHHostname
	I1024 19:53:40.434799   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:40.435117   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:38:1c", ip: ""} in network mk-default-k8s-diff-port-744739: {Iface:virbr3 ExpiryTime:2023-10-24 20:51:27 +0000 UTC Type:0 Mac:52:54:00:69:38:1c Iaid: IPaddr:192.168.61.252 Prefix:24 Hostname:default-k8s-diff-port-744739 Clientid:01:52:54:00:69:38:1c}
	I1024 19:53:40.435153   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined IP address 192.168.61.252 and MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:40.435296   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHPort
	I1024 19:53:40.435525   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHKeyPath
	I1024 19:53:40.435676   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHKeyPath
	I1024 19:53:40.435852   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHUsername
	I1024 19:53:40.436008   62204 main.go:141] libmachine: Using SSH client type: native
	I1024 19:53:40.436330   62204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.252 22 <nil> <nil>}
	I1024 19:53:40.436402   62204 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1024 19:53:40.561154   62204 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1024 19:53:40.561189   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHHostname
	I1024 19:53:40.563978   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:40.564249   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:38:1c", ip: ""} in network mk-default-k8s-diff-port-744739: {Iface:virbr3 ExpiryTime:2023-10-24 20:51:27 +0000 UTC Type:0 Mac:52:54:00:69:38:1c Iaid: IPaddr:192.168.61.252 Prefix:24 Hostname:default-k8s-diff-port-744739 Clientid:01:52:54:00:69:38:1c}
	I1024 19:53:40.564283   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined IP address 192.168.61.252 and MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:40.564482   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHPort
	I1024 19:53:40.564705   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHKeyPath
	I1024 19:53:40.564908   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHKeyPath
	I1024 19:53:40.565062   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHUsername
	I1024 19:53:40.565234   62204 main.go:141] libmachine: Using SSH client type: native
	I1024 19:53:40.565620   62204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.252 22 <nil> <nil>}
	I1024 19:53:40.565640   62204 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1024 19:53:41.581652   62204 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1024 19:53:41.581686   62204 machine.go:91] provisioned docker machine in 1.814589105s
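The SSH one-liner above is an idempotent unit update: the freshly rendered docker.service.new only replaces the installed unit, followed by daemon-reload, enable, and restart, when diff reports a difference. Here the installed unit did not exist yet ("can't stat"), so diff failed and the replace branch ran, producing the "Created symlink" line. A rough Go equivalent of that compare-then-swap, assuming local file access instead of the SSH session:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// updateUnit keeps the installed unit when it already matches the rendered
	// one; otherwise it swaps the file in and reloads/enables/restarts, like
	// the shell one-liner in the log.
	func updateUnit(installed, rendered, service string) error {
		old, _ := os.ReadFile(installed) // a missing unit (as in the log) reads empty
		next, err := os.ReadFile(rendered)
		if err != nil {
			return err
		}
		if bytes.Equal(old, next) {
			return os.Remove(rendered) // unchanged: discard the .new file
		}
		if err := os.Rename(rendered, installed); err != nil {
			return err
		}
		for _, args := range [][]string{{"daemon-reload"}, {"enable", service}, {"restart", service}} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		fmt.Println(updateUnit("/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new", "docker"))
	}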
	I1024 19:53:41.581696   62204 start.go:300] post-start starting for "default-k8s-diff-port-744739" (driver="kvm2")
	I1024 19:53:41.581705   62204 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:53:41.581724   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .DriverName
	I1024 19:53:41.582062   62204 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:53:41.582094   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHHostname
	I1024 19:53:41.584515   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:41.584825   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:38:1c", ip: ""} in network mk-default-k8s-diff-port-744739: {Iface:virbr3 ExpiryTime:2023-10-24 20:51:27 +0000 UTC Type:0 Mac:52:54:00:69:38:1c Iaid: IPaddr:192.168.61.252 Prefix:24 Hostname:default-k8s-diff-port-744739 Clientid:01:52:54:00:69:38:1c}
	I1024 19:53:41.584855   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined IP address 192.168.61.252 and MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:41.585010   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHPort
	I1024 19:53:41.585203   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHKeyPath
	I1024 19:53:41.585378   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHUsername
	I1024 19:53:41.585536   62204 sshutil.go:53] new ssh client: &{IP:192.168.61.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/default-k8s-diff-port-744739/id_rsa Username:docker}
	I1024 19:53:41.669835   62204 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:53:41.674139   62204 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 19:53:41.674166   62204 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9104/.minikube/addons for local assets ...
	I1024 19:53:41.674226   62204 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9104/.minikube/files for local assets ...
	I1024 19:53:41.674308   62204 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9104/.minikube/files/etc/ssl/certs/164432.pem -> 164432.pem in /etc/ssl/certs
	I1024 19:53:41.674387   62204 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 19:53:41.684677   62204 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9104/.minikube/files/etc/ssl/certs/164432.pem --> /etc/ssl/certs/164432.pem (1708 bytes)
	I1024 19:53:41.709017   62204 start.go:303] post-start completed in 127.309147ms
	I1024 19:53:41.709040   62204 fix.go:56] fixHost completed within 21.291811372s
	I1024 19:53:41.709102   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHHostname
	I1024 19:53:41.711914   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:41.712239   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:38:1c", ip: ""} in network mk-default-k8s-diff-port-744739: {Iface:virbr3 ExpiryTime:2023-10-24 20:51:27 +0000 UTC Type:0 Mac:52:54:00:69:38:1c Iaid: IPaddr:192.168.61.252 Prefix:24 Hostname:default-k8s-diff-port-744739 Clientid:01:52:54:00:69:38:1c}
	I1024 19:53:41.712272   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined IP address 192.168.61.252 and MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:41.712398   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHPort
	I1024 19:53:41.712617   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHKeyPath
	I1024 19:53:41.712780   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHKeyPath
	I1024 19:53:41.712914   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHUsername
	I1024 19:53:41.713124   62204 main.go:141] libmachine: Using SSH client type: native
	I1024 19:53:41.713570   62204 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.252 22 <nil> <nil>}
	I1024 19:53:41.713585   62204 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1024 19:53:41.826993   62204 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698177221.777169376
	
	I1024 19:53:41.827015   62204 fix.go:206] guest clock: 1698177221.777169376
	I1024 19:53:41.827022   62204 fix.go:219] Guest: 2023-10-24 19:53:41.777169376 +0000 UTC Remote: 2023-10-24 19:53:41.709043932 +0000 UTC m=+21.459511469 (delta=68.125444ms)
	I1024 19:53:41.827039   62204 fix.go:190] guest clock delta is within tolerance: 68.125444ms
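fix.go is sanity-checking the VM's clock here: it runs date +%s.%N in the guest, subtracts the host-side timestamp, and only forces a resync when the absolute delta exceeds a tolerance; the 68.125444ms above passed. A toy version of the check, with the 2s tolerance assumed for illustration:

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaOK reports the absolute guest/host skew and whether it falls
	// inside the tolerance, mirroring the fix.go check above.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(68125444 * time.Nanosecond) // the 68.125444ms delta from the log
		d, ok := clockDeltaOK(guest, host, 2*time.Second)
		fmt.Printf("delta=%v within tolerance: %v\n", d, ok)
	}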
	I1024 19:53:41.827043   62204 start.go:83] releasing machines lock for "default-k8s-diff-port-744739", held for 21.409828104s
	I1024 19:53:41.827063   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .DriverName
	I1024 19:53:41.827345   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetIP
	I1024 19:53:41.830207   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:41.830617   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:38:1c", ip: ""} in network mk-default-k8s-diff-port-744739: {Iface:virbr3 ExpiryTime:2023-10-24 20:51:27 +0000 UTC Type:0 Mac:52:54:00:69:38:1c Iaid: IPaddr:192.168.61.252 Prefix:24 Hostname:default-k8s-diff-port-744739 Clientid:01:52:54:00:69:38:1c}
	I1024 19:53:41.830651   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined IP address 192.168.61.252 and MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:41.830778   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .DriverName
	I1024 19:53:41.831328   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .DriverName
	I1024 19:53:41.831533   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .DriverName
	I1024 19:53:41.831624   62204 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:53:41.831665   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHHostname
	I1024 19:53:41.831774   62204 ssh_runner.go:195] Run: cat /version.json
	I1024 19:53:41.831800   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHHostname
	I1024 19:53:41.834630   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:41.834900   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:41.834942   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:38:1c", ip: ""} in network mk-default-k8s-diff-port-744739: {Iface:virbr3 ExpiryTime:2023-10-24 20:51:27 +0000 UTC Type:0 Mac:52:54:00:69:38:1c Iaid: IPaddr:192.168.61.252 Prefix:24 Hostname:default-k8s-diff-port-744739 Clientid:01:52:54:00:69:38:1c}
	I1024 19:53:41.834970   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined IP address 192.168.61.252 and MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:41.835138   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHPort
	I1024 19:53:41.835259   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:38:1c", ip: ""} in network mk-default-k8s-diff-port-744739: {Iface:virbr3 ExpiryTime:2023-10-24 20:51:27 +0000 UTC Type:0 Mac:52:54:00:69:38:1c Iaid: IPaddr:192.168.61.252 Prefix:24 Hostname:default-k8s-diff-port-744739 Clientid:01:52:54:00:69:38:1c}
	I1024 19:53:41.835289   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined IP address 192.168.61.252 and MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:41.835312   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHKeyPath
	I1024 19:53:41.835419   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHPort
	I1024 19:53:41.835482   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHUsername
	I1024 19:53:41.835638   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHKeyPath
	I1024 19:53:41.835738   62204 sshutil.go:53] new ssh client: &{IP:192.168.61.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/default-k8s-diff-port-744739/id_rsa Username:docker}
	I1024 19:53:41.835748   62204 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHUsername
	I1024 19:53:41.835912   62204 sshutil.go:53] new ssh client: &{IP:192.168.61.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/default-k8s-diff-port-744739/id_rsa Username:docker}
	I1024 19:53:41.944902   62204 ssh_runner.go:195] Run: systemctl --version
	I1024 19:53:41.952089   62204 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 19:53:41.958176   62204 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 19:53:41.958245   62204 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:53:41.974157   62204 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
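The find/-exec step above sidelines competing CNI configs by renaming anything matching *bridge* or *podman* in /etc/cni/net.d with a .mk_disabled suffix, which is why 87-podman-bridge.conflist shows up as disabled. A rough Go equivalent of that rename pass:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeCNIs renames bridge/podman configs so they no longer match
	// *.conflist, mirroring the find/-exec mv step in the log.
	func disableBridgeCNIs(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}

	func main() {
		disabled, err := disableBridgeCNIs("/etc/cni/net.d")
		fmt.Println(disabled, err)
	}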
	I1024 19:53:41.974185   62204 start.go:472] detecting cgroup driver to use...
	I1024 19:53:41.974307   62204 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:53:41.992218   62204 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1024 19:53:42.001429   62204 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1024 19:53:42.011237   62204 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1024 19:53:42.011298   62204 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1024 19:53:42.020678   62204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1024 19:53:42.030365   62204 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1024 19:53:42.039491   62204 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1024 19:53:42.048934   62204 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 19:53:42.058743   62204 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1024 19:53:42.068510   62204 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 19:53:42.077145   62204 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 19:53:42.086004   62204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:53:42.205123   62204 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1024 19:53:42.226317   62204 start.go:472] detecting cgroup driver to use...
	I1024 19:53:42.226405   62204 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1024 19:53:42.248607   62204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:53:42.268114   62204 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:53:42.293450   62204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:53:42.306285   62204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1024 19:53:42.320241   62204 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1024 19:53:42.350217   62204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1024 19:53:42.364905   62204 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:53:42.385083   62204 ssh_runner.go:195] Run: which cri-dockerd
	I1024 19:53:42.389329   62204 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1024 19:53:42.397890   62204 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1024 19:53:42.418469   62204 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1024 19:53:42.528018   62204 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1024 19:53:42.642815   62204 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1024 19:53:42.642950   62204 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1024 19:53:42.659502   62204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:53:42.785376   62204 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1024 19:53:44.338676   62204 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.553263243s)
	I1024 19:53:44.338751   62204 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1024 19:53:44.454544   62204 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1024 19:53:44.575233   62204 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1024 19:53:44.709254   62204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:53:44.833392   62204 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1024 19:53:44.855321   62204 out.go:177] 
	W1024 19:53:44.857178   62204 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W1024 19:53:44.857197   62204 out.go:239] * 
	W1024 19:53:44.858309   62204 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1024 19:53:44.860228   62204 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p default-k8s-diff-port-744739 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3": exit status 90
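The fatal step is `sudo systemctl restart cri-docker.socket` exiting 1; everything before it (unit rewrite, crictl.yaml, daemon.json, docker restart) succeeded. Note the shape in which the harness reports such failures: the command, the exit status, then both streams, even when stdout is empty. A stripped-down sketch of that run-and-report pattern with os/exec, executing locally here where the real ssh_runner goes over SSH:

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		// The same command the failing step ran, minus the SSH transport.
		cmd := exec.Command("sudo", "systemctl", "restart", "cri-docker.socket")
		var stdout, stderr bytes.Buffer
		cmd.Stdout, cmd.Stderr = &stdout, &stderr
		if err := cmd.Run(); err != nil {
			// Mirrors the log's failure format: error first, then both streams.
			fmt.Printf("sudo systemctl restart cri-docker.socket: %v\nstdout:\n%s\nstderr:\n%s\n",
				err, stdout.String(), stderr.String())
		}
	}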
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-744739 -n default-k8s-diff-port-744739
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-744739 -n default-k8s-diff-port-744739: exit status 6 (277.709396ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1024 19:53:45.134052   62385 status.go:415] kubeconfig endpoint: extract IP: "default-k8s-diff-port-744739" does not appear in /home/jenkins/minikube-integration/17485-9104/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-744739" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (24.91s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-744739" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-744739 -n default-k8s-diff-port-744739
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-744739 -n default-k8s-diff-port-744739: exit status 6 (269.521868ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1024 19:53:45.408521   62413 status.go:415] kubeconfig endpoint: extract IP: "default-k8s-diff-port-744739" does not appear in /home/jenkins/minikube-integration/17485-9104/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-744739" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-744739" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-744739 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-744739 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (49.81615ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-744739" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-744739 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-744739 -n default-k8s-diff-port-744739
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-744739 -n default-k8s-diff-port-744739: exit status 6 (255.92083ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1024 19:53:45.712150   62451 status.go:415] kubeconfig endpoint: extract IP: "default-k8s-diff-port-744739" does not appear in /home/jenkins/minikube-integration/17485-9104/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-744739" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (2.49s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-744739 "sudo crictl images -o json"
E1024 19:53:46.029415   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kindnet-014827/client.crt: no such file or directory
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p default-k8s-diff-port-744739 "sudo crictl images -o json": exit status 1 (2.249188243s)

                                                
                                                
-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-linux-amd64 ssh -p default-k8s-diff-port-744739 \"sudo crictl images -o json\"": exit status 1
start_stop_delete_test.go:304: failed to decode images json invalid character '\x1b' looking for beginning of value. output:
FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 
start_stop_delete_test.go:304: v1.28.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.3",
- 	"registry.k8s.io/kube-controller-manager:v1.28.3",
- 	"registry.k8s.io/kube-proxy:v1.28.3",
- 	"registry.k8s.io/kube-scheduler:v1.28.3",
- 	"registry.k8s.io/pause:3.9",
  }
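The JSON decode failure is mechanical rather than mysterious: crictl printed a colored FATA banner instead of an image list, so the first byte handed to json.Unmarshal is the ANSI escape 0x1b. A minimal reproduction of exactly that error:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// What crictl actually wrote: a colored FATA banner, not JSON.
		out := []byte("\x1b[31mFATA[0002] validate service connection ...\x1b[0m")
		var images struct{ Images []interface{} }
		err := json.Unmarshal(out, &images)
		fmt.Println(err) // invalid character '\x1b' looking for beginning of value
	}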
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-744739 -n default-k8s-diff-port-744739
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-744739 -n default-k8s-diff-port-744739: exit status 6 (239.622095ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1024 19:53:48.203790   62521 status.go:415] kubeconfig endpoint: extract IP: "default-k8s-diff-port-744739" does not appear in /home/jenkins/minikube-integration/17485-9104/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-744739" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (2.49s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (1.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-744739 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-744739 --alsologtostderr -v=1: exit status 80 (1.492564957s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-744739 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1024 19:53:48.274346   62551 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:53:48.274654   62551 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:53:48.274666   62551 out.go:309] Setting ErrFile to fd 2...
	I1024 19:53:48.274673   62551 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:53:48.274946   62551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9104/.minikube/bin
	I1024 19:53:48.275170   62551 out.go:303] Setting JSON to false
	I1024 19:53:48.275190   62551 mustload.go:65] Loading cluster: default-k8s-diff-port-744739
	I1024 19:53:48.275514   62551 config.go:182] Loaded profile config "default-k8s-diff-port-744739": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1024 19:53:48.275864   62551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:53:48.275906   62551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:53:48.291715   62551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45235
	I1024 19:53:48.292251   62551 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:53:48.292816   62551 main.go:141] libmachine: Using API Version  1
	I1024 19:53:48.292849   62551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:53:48.293159   62551 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:53:48.293358   62551 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetState
	I1024 19:53:48.295024   62551 host.go:66] Checking if "default-k8s-diff-port-744739" exists ...
	I1024 19:53:48.295311   62551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:53:48.295354   62551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:53:48.310636   62551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39737
	I1024 19:53:48.311037   62551 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:53:48.311552   62551 main.go:141] libmachine: Using API Version  1
	I1024 19:53:48.311589   62551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:53:48.311910   62551 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:53:48.312110   62551 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .DriverName
	I1024 19:53:48.313232   62551 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.31.0-1697471113-17434/minikube-v1.31.0-1697471113-17434-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.31.0-1697471113-17434-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string:/home/jenkins:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-744739 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1024 19:53:48.315843   62551 out.go:177] * Pausing node default-k8s-diff-port-744739 ... 
	I1024 19:53:48.317332   62551 host.go:66] Checking if "default-k8s-diff-port-744739" exists ...
	I1024 19:53:48.317616   62551 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:53:48.317662   62551 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:53:48.332442   62551 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38337
	I1024 19:53:48.332869   62551 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:53:48.333305   62551 main.go:141] libmachine: Using API Version  1
	I1024 19:53:48.333329   62551 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:53:48.333603   62551 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:53:48.333820   62551 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .DriverName
	I1024 19:53:48.334018   62551 ssh_runner.go:195] Run: systemctl --version
	I1024 19:53:48.334054   62551 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHHostname
	I1024 19:53:48.336983   62551 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:48.337422   62551 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:69:38:1c", ip: ""} in network mk-default-k8s-diff-port-744739: {Iface:virbr3 ExpiryTime:2023-10-24 20:51:27 +0000 UTC Type:0 Mac:52:54:00:69:38:1c Iaid: IPaddr:192.168.61.252 Prefix:24 Hostname:default-k8s-diff-port-744739 Clientid:01:52:54:00:69:38:1c}
	I1024 19:53:48.337469   62551 main.go:141] libmachine: (default-k8s-diff-port-744739) DBG | domain default-k8s-diff-port-744739 has defined IP address 192.168.61.252 and MAC address 52:54:00:69:38:1c in network mk-default-k8s-diff-port-744739
	I1024 19:53:48.337581   62551 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHPort
	I1024 19:53:48.337755   62551 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHKeyPath
	I1024 19:53:48.337940   62551 main.go:141] libmachine: (default-k8s-diff-port-744739) Calling .GetSSHUsername
	I1024 19:53:48.338087   62551 sshutil.go:53] new ssh client: &{IP:192.168.61.252 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/default-k8s-diff-port-744739/id_rsa Username:docker}
	I1024 19:53:48.424441   62551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:53:48.439050   62551 pause.go:51] kubelet running: false
	I1024 19:53:48.439127   62551 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1024 19:53:48.454681   62551 retry.go:31] will retry after 223.498375ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I1024 19:53:48.679177   62551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:53:48.694864   62551 pause.go:51] kubelet running: false
	I1024 19:53:48.694956   62551 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1024 19:53:48.709984   62551 retry.go:31] will retry after 276.55117ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I1024 19:53:48.987481   62551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:53:49.003242   62551 pause.go:51] kubelet running: false
	I1024 19:53:49.003300   62551 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1024 19:53:49.018532   62551 retry.go:31] will retry after 649.907ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I1024 19:53:49.669174   62551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:53:49.684838   62551 pause.go:51] kubelet running: false
	I1024 19:53:49.684903   62551 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1024 19:53:49.702521   62551 out.go:177] 
	W1024 19:53:49.704143   62551 out.go:239] X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	W1024 19:53:49.704170   62551 out.go:239] * 
	W1024 19:53:49.707968   62551 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1024 19:53:49.709295   62551 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-linux-amd64 pause -p default-k8s-diff-port-744739 --alsologtostderr -v=1 failed: exit status 80
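Before giving up, the pause path retried `systemctl disable --now kubelet` with growing, jittered delays (223ms, 276ms, 649ms in the log) via the retry.go helper. A simplified sketch of that backoff loop, with the attempt budget and base delay assumed for illustration:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs fn until it succeeds or the attempt budget is spent, sleeping
	// a jittered, roughly doubling delay between tries, like the intervals in
	// the log above.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			d := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		err := retry(4, 200*time.Millisecond, func() error {
			return errors.New("kubelet disable --now: unit file kubelet.service does not exist")
		})
		fmt.Println("giving up:", err)
	}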
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-744739 -n default-k8s-diff-port-744739
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-744739 -n default-k8s-diff-port-744739: exit status 6 (246.858527ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1024 19:53:49.946246   62581 status.go:415] kubeconfig endpoint: extract IP: "default-k8s-diff-port-744739" does not appear in /home/jenkins/minikube-integration/17485-9104/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-744739" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-744739 -n default-k8s-diff-port-744739
E1024 19:53:50.092270   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/enable-default-cni-014827/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-744739 -n default-k8s-diff-port-744739: exit status 6 (245.889334ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1024 19:53:50.187280   62611 status.go:415] kubeconfig endpoint: extract IP: "default-k8s-diff-port-744739" does not appear in /home/jenkins/minikube-integration/17485-9104/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-744739" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (1.99s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-531596 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p old-k8s-version-531596 "sudo crictl images -o json": exit status 1 (230.202084ms)

                                                
                                                
-- stdout --
	FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-linux-amd64 ssh -p old-k8s-version-531596 \"sudo crictl images -o json\"": exit status 1
start_stop_delete_test.go:304: failed to decode images json invalid character '\x1b' looking for beginning of value. output:
FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
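Both complaints in this test trace back to one root cause: the old-k8s-version profile (v1.16) still serves the legacy dockershim socket, which predates the v1 CRI, while the crictl in the guest requires the CRI v1 ImageService. crictl therefore aborts with a colored FATA banner instead of emitting JSON; the test then hands that banner to Go's encoding/json, whose first byte is the ANSI escape 0x1b from the color code, and the "images missing" diff is just a downstream symptom of no image list ever being obtained. A self-contained sketch reproducing the decode error (banner text abbreviated):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// What the test actually received: crictl's colored FATA banner, not JSON.
		// The leading byte is the ANSI escape 0x1b emitted by the log coloring.
		out := []byte("\x1b[31mFATA\x1b[0m[0000] validate service connection: unknown service runtime.v1.ImageService")

		var parsed struct {
			Images []struct {
				RepoTags []string `json:"repoTags"`
			} `json:"images"`
		}

		fmt.Println(json.Unmarshal(out, &parsed))
		// Prints: invalid character '\x1b' looking for beginning of value
	}
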
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-531596 -n old-k8s-version-531596
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-531596 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-531596 logs -n 25: (1.016049755s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p                                                     | default-k8s-diff-port-744739 | jenkins | v1.31.2 | 24 Oct 23 19:53 UTC |                     |
	|         | default-k8s-diff-port-744739                           |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-744739 | jenkins | v1.31.2 | 24 Oct 23 19:53 UTC |                     |
	|         | default-k8s-diff-port-744739                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-744739 | jenkins | v1.31.2 | 24 Oct 23 19:53 UTC | 24 Oct 23 19:53 UTC |
	|         | default-k8s-diff-port-744739                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-744739 | jenkins | v1.31.2 | 24 Oct 23 19:53 UTC | 24 Oct 23 19:53 UTC |
	|         | default-k8s-diff-port-744739                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-468999 --memory=2200 --alsologtostderr   | newest-cni-468999            | jenkins | v1.31.2 | 24 Oct 23 19:53 UTC | 24 Oct 23 19:55 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.28.3            |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-468999             | newest-cni-468999            | jenkins | v1.31.2 | 24 Oct 23 19:55 UTC | 24 Oct 23 19:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-468999                                   | newest-cni-468999            | jenkins | v1.31.2 | 24 Oct 23 19:55 UTC | 24 Oct 23 19:55 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-468999                  | newest-cni-468999            | jenkins | v1.31.2 | 24 Oct 23 19:55 UTC | 24 Oct 23 19:55 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-468999 --memory=2200 --alsologtostderr   | newest-cni-468999            | jenkins | v1.31.2 | 24 Oct 23 19:55 UTC | 24 Oct 23 19:56 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.28.3            |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-468999 sudo                              | newest-cni-468999            | jenkins | v1.31.2 | 24 Oct 23 19:56 UTC | 24 Oct 23 19:56 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-468999                                   | newest-cni-468999            | jenkins | v1.31.2 | 24 Oct 23 19:56 UTC | 24 Oct 23 19:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-468999                                   | newest-cni-468999            | jenkins | v1.31.2 | 24 Oct 23 19:56 UTC | 24 Oct 23 19:56 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-468999                                   | newest-cni-468999            | jenkins | v1.31.2 | 24 Oct 23 19:56 UTC | 24 Oct 23 19:56 UTC |
	| delete  | -p newest-cni-468999                                   | newest-cni-468999            | jenkins | v1.31.2 | 24 Oct 23 19:56 UTC | 24 Oct 23 19:56 UTC |
	| ssh     | -p no-preload-301948 sudo                              | no-preload-301948            | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC | 24 Oct 23 19:58 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p no-preload-301948                                   | no-preload-301948            | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC | 24 Oct 23 19:58 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-301948                                   | no-preload-301948            | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC | 24 Oct 23 19:58 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-301948                                   | no-preload-301948            | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC | 24 Oct 23 19:58 UTC |
	| delete  | -p no-preload-301948                                   | no-preload-301948            | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC | 24 Oct 23 19:58 UTC |
	| ssh     | -p embed-certs-585475 sudo                             | embed-certs-585475           | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC | 24 Oct 23 19:58 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p embed-certs-585475                                  | embed-certs-585475           | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC | 24 Oct 23 19:58 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-585475                                  | embed-certs-585475           | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC | 24 Oct 23 19:58 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-585475                                  | embed-certs-585475           | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC | 24 Oct 23 19:58 UTC |
	| delete  | -p embed-certs-585475                                  | embed-certs-585475           | jenkins | v1.31.2 | 24 Oct 23 19:58 UTC | 24 Oct 23 19:58 UTC |
	| ssh     | -p old-k8s-version-531596 sudo                         | old-k8s-version-531596       | jenkins | v1.31.2 | 24 Oct 23 20:00 UTC |                     |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:55:16
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 19:55:16.279761   63363 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:55:16.280000   63363 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:55:16.280008   63363 out.go:309] Setting ErrFile to fd 2...
	I1024 19:55:16.280013   63363 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:55:16.280196   63363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9104/.minikube/bin
	I1024 19:55:16.280834   63363 out.go:303] Setting JSON to false
	I1024 19:55:16.281789   63363 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":5614,"bootTime":1698171702,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:55:16.281869   63363 start.go:138] virtualization: kvm guest
	I1024 19:55:16.284279   63363 out.go:177] * [newest-cni-468999] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:55:16.286345   63363 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:55:16.286442   63363 notify.go:220] Checking for updates...
	I1024 19:55:16.287971   63363 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:55:16.289701   63363 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9104/kubeconfig
	I1024 19:55:16.291170   63363 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9104/.minikube
	I1024 19:55:16.292778   63363 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:55:16.294233   63363 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:55:16.296105   63363 config.go:182] Loaded profile config "newest-cni-468999": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1024 19:55:16.296518   63363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:55:16.296588   63363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:55:16.312316   63363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45693
	I1024 19:55:16.312979   63363 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:55:16.314183   63363 main.go:141] libmachine: Using API Version  1
	I1024 19:55:16.314213   63363 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:55:16.314796   63363 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:55:16.315016   63363 main.go:141] libmachine: (newest-cni-468999) Calling .DriverName
	I1024 19:55:16.315270   63363 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:55:16.315574   63363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:55:16.315608   63363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:55:16.330548   63363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40353
	I1024 19:55:16.330996   63363 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:55:16.331478   63363 main.go:141] libmachine: Using API Version  1
	I1024 19:55:16.331509   63363 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:55:16.331851   63363 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:55:16.332034   63363 main.go:141] libmachine: (newest-cni-468999) Calling .DriverName
	I1024 19:55:16.371953   63363 out.go:177] * Using the kvm2 driver based on existing profile
	I1024 19:55:16.373478   63363 start.go:298] selected driver: kvm2
	I1024 19:55:16.373493   63363 start.go:902] validating driver "kvm2" against &{Name:newest-cni-468999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-468999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.68 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:
false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:55:16.373607   63363 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:55:16.374320   63363 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:55:16.374399   63363 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9104/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 19:55:16.390592   63363 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1024 19:55:16.390942   63363 start_flags.go:945] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1024 19:55:16.390996   63363 cni.go:84] Creating CNI manager for ""
	I1024 19:55:16.391010   63363 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1024 19:55:16.391016   63363 start_flags.go:323] config:
	{Name:newest-cni-468999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-468999 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.68 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts
:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:55:16.391147   63363 iso.go:125] acquiring lock: {Name:mkf528b771f12bbaddd502db30db0ccdeec4a711 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:55:16.392892   63363 out.go:177] * Starting control plane node newest-cni-468999 in cluster newest-cni-468999
	I1024 19:55:16.394197   63363 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1024 19:55:16.394232   63363 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17485-9104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1024 19:55:16.394243   63363 cache.go:57] Caching tarball of preloaded images
	I1024 19:55:16.394332   63363 preload.go:174] Found /home/jenkins/minikube-integration/17485-9104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1024 19:55:16.394343   63363 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1024 19:55:16.394433   63363 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/newest-cni-468999/config.json ...
	I1024 19:55:16.394607   63363 start.go:365] acquiring machines lock for newest-cni-468999: {Name:mkcbabc1952bf564872040e51bac552940a65164 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1024 19:55:16.394643   63363 start.go:369] acquired machines lock for "newest-cni-468999" in 19.742µs
	I1024 19:55:16.394655   63363 start.go:96] Skipping create...Using existing machine configuration
	I1024 19:55:16.394661   63363 fix.go:54] fixHost starting: 
	I1024 19:55:16.394917   63363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:55:16.394950   63363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:55:16.409527   63363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46481
	I1024 19:55:16.409952   63363 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:55:16.410399   63363 main.go:141] libmachine: Using API Version  1
	I1024 19:55:16.410425   63363 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:55:16.410806   63363 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:55:16.410969   63363 main.go:141] libmachine: (newest-cni-468999) Calling .DriverName
	I1024 19:55:16.411141   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetState
	I1024 19:55:16.412762   63363 fix.go:102] recreateIfNeeded on newest-cni-468999: state=Stopped err=<nil>
	I1024 19:55:16.412801   63363 main.go:141] libmachine: (newest-cni-468999) Calling .DriverName
	W1024 19:55:16.412975   63363 fix.go:128] unexpected machine state, will restart: <nil>
	I1024 19:55:16.415001   63363 out.go:177] * Restarting existing kvm2 VM for "newest-cni-468999" ...
	I1024 19:55:12.605426   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:14.606988   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:12.883691   61295 pod_ready.go:102] pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:14.883905   61295 pod_ready.go:102] pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:17.382633   61295 pod_ready.go:102] pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:14.087211   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:16.088394   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:18.589285   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:16.416392   63363 main.go:141] libmachine: (newest-cni-468999) Calling .Start
	I1024 19:55:16.416578   63363 main.go:141] libmachine: (newest-cni-468999) Ensuring networks are active...
	I1024 19:55:16.417404   63363 main.go:141] libmachine: (newest-cni-468999) Ensuring network default is active
	I1024 19:55:16.417771   63363 main.go:141] libmachine: (newest-cni-468999) Ensuring network mk-newest-cni-468999 is active
	I1024 19:55:16.418273   63363 main.go:141] libmachine: (newest-cni-468999) Getting domain xml...
	I1024 19:55:16.419086   63363 main.go:141] libmachine: (newest-cni-468999) Creating domain...
	I1024 19:55:17.704142   63363 main.go:141] libmachine: (newest-cni-468999) Waiting to get IP...
	I1024 19:55:17.705069   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:17.705673   63363 main.go:141] libmachine: (newest-cni-468999) DBG | unable to find current IP address of domain newest-cni-468999 in network mk-newest-cni-468999
	I1024 19:55:17.705732   63363 main.go:141] libmachine: (newest-cni-468999) DBG | I1024 19:55:17.705635   63408 retry.go:31] will retry after 285.646827ms: waiting for machine to come up
	I1024 19:55:17.993281   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:17.993830   63363 main.go:141] libmachine: (newest-cni-468999) DBG | unable to find current IP address of domain newest-cni-468999 in network mk-newest-cni-468999
	I1024 19:55:17.993864   63363 main.go:141] libmachine: (newest-cni-468999) DBG | I1024 19:55:17.993776   63408 retry.go:31] will retry after 327.09468ms: waiting for machine to come up
	I1024 19:55:18.322260   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:18.322822   63363 main.go:141] libmachine: (newest-cni-468999) DBG | unable to find current IP address of domain newest-cni-468999 in network mk-newest-cni-468999
	I1024 19:55:18.322861   63363 main.go:141] libmachine: (newest-cni-468999) DBG | I1024 19:55:18.322763   63408 retry.go:31] will retry after 334.179389ms: waiting for machine to come up
	I1024 19:55:18.658226   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:18.658731   63363 main.go:141] libmachine: (newest-cni-468999) DBG | unable to find current IP address of domain newest-cni-468999 in network mk-newest-cni-468999
	I1024 19:55:18.658768   63363 main.go:141] libmachine: (newest-cni-468999) DBG | I1024 19:55:18.658691   63408 retry.go:31] will retry after 521.728216ms: waiting for machine to come up
	I1024 19:55:19.182530   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:19.183112   63363 main.go:141] libmachine: (newest-cni-468999) DBG | unable to find current IP address of domain newest-cni-468999 in network mk-newest-cni-468999
	I1024 19:55:19.183138   63363 main.go:141] libmachine: (newest-cni-468999) DBG | I1024 19:55:19.183099   63408 retry.go:31] will retry after 535.675936ms: waiting for machine to come up
	I1024 19:55:19.720599   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:19.721122   63363 main.go:141] libmachine: (newest-cni-468999) DBG | unable to find current IP address of domain newest-cni-468999 in network mk-newest-cni-468999
	I1024 19:55:19.721146   63363 main.go:141] libmachine: (newest-cni-468999) DBG | I1024 19:55:19.721083   63408 retry.go:31] will retry after 607.80452ms: waiting for machine to come up
	I1024 19:55:20.330928   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:20.331398   63363 main.go:141] libmachine: (newest-cni-468999) DBG | unable to find current IP address of domain newest-cni-468999 in network mk-newest-cni-468999
	I1024 19:55:20.331422   63363 main.go:141] libmachine: (newest-cni-468999) DBG | I1024 19:55:20.331351   63408 retry.go:31] will retry after 1.020881825s: waiting for machine to come up
	I1024 19:55:17.105315   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:19.110168   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:21.605651   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:19.883676   61295 pod_ready.go:102] pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:21.884549   61295 pod_ready.go:102] pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:21.087168   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:23.089276   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:21.353534   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:21.354016   63363 main.go:141] libmachine: (newest-cni-468999) DBG | unable to find current IP address of domain newest-cni-468999 in network mk-newest-cni-468999
	I1024 19:55:21.354055   63363 main.go:141] libmachine: (newest-cni-468999) DBG | I1024 19:55:21.353953   63408 retry.go:31] will retry after 1.05107715s: waiting for machine to come up
	I1024 19:55:22.406261   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:22.406705   63363 main.go:141] libmachine: (newest-cni-468999) DBG | unable to find current IP address of domain newest-cni-468999 in network mk-newest-cni-468999
	I1024 19:55:22.406732   63363 main.go:141] libmachine: (newest-cni-468999) DBG | I1024 19:55:22.406673   63408 retry.go:31] will retry after 1.781277754s: waiting for machine to come up
	I1024 19:55:24.189106   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:24.189559   63363 main.go:141] libmachine: (newest-cni-468999) DBG | unable to find current IP address of domain newest-cni-468999 in network mk-newest-cni-468999
	I1024 19:55:24.189602   63363 main.go:141] libmachine: (newest-cni-468999) DBG | I1024 19:55:24.189534   63408 retry.go:31] will retry after 2.074485394s: waiting for machine to come up
	I1024 19:55:26.266221   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:26.266922   63363 main.go:141] libmachine: (newest-cni-468999) DBG | unable to find current IP address of domain newest-cni-468999 in network mk-newest-cni-468999
	I1024 19:55:26.266952   63363 main.go:141] libmachine: (newest-cni-468999) DBG | I1024 19:55:26.266825   63408 retry.go:31] will retry after 1.968134738s: waiting for machine to come up
	I1024 19:55:23.607260   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:26.105049   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:24.383064   61295 pod_ready.go:102] pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:26.886681   61295 pod_ready.go:102] pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:25.587022   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:27.589445   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:28.236354   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:28.236833   63363 main.go:141] libmachine: (newest-cni-468999) DBG | unable to find current IP address of domain newest-cni-468999 in network mk-newest-cni-468999
	I1024 19:55:28.236864   63363 main.go:141] libmachine: (newest-cni-468999) DBG | I1024 19:55:28.236776   63408 retry.go:31] will retry after 2.740030602s: waiting for machine to come up
	I1024 19:55:30.979823   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:30.980387   63363 main.go:141] libmachine: (newest-cni-468999) DBG | unable to find current IP address of domain newest-cni-468999 in network mk-newest-cni-468999
	I1024 19:55:30.980422   63363 main.go:141] libmachine: (newest-cni-468999) DBG | I1024 19:55:30.980327   63408 retry.go:31] will retry after 3.508658903s: waiting for machine to come up
	I1024 19:55:28.105398   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:30.106163   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:29.382180   61295 pod_ready.go:102] pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:31.382313   61295 pod_ready.go:102] pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:30.090750   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:32.592849   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:34.492437   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:34.492992   63363 main.go:141] libmachine: (newest-cni-468999) DBG | unable to find current IP address of domain newest-cni-468999 in network mk-newest-cni-468999
	I1024 19:55:34.493015   63363 main.go:141] libmachine: (newest-cni-468999) DBG | I1024 19:55:34.492932   63408 retry.go:31] will retry after 3.879976868s: waiting for machine to come up
	I1024 19:55:32.609028   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:35.104522   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:33.383806   61295 pod_ready.go:102] pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:35.883418   61295 pod_ready.go:102] pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:35.086724   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:37.087213   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:38.375085   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:38.375811   63363 main.go:141] libmachine: (newest-cni-468999) Found IP for machine: 192.168.61.68
	I1024 19:55:38.375836   63363 main.go:141] libmachine: (newest-cni-468999) Reserving static IP address...
	I1024 19:55:38.375850   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has current primary IP address 192.168.61.68 and MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:38.376254   63363 main.go:141] libmachine: (newest-cni-468999) DBG | found host DHCP lease matching {name: "newest-cni-468999", mac: "52:54:00:5f:c0:60", ip: "192.168.61.68"} in network mk-newest-cni-468999: {Iface:virbr3 ExpiryTime:2023-10-24 20:55:29 +0000 UTC Type:0 Mac:52:54:00:5f:c0:60 Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:newest-cni-468999 Clientid:01:52:54:00:5f:c0:60}
	I1024 19:55:38.376311   63363 main.go:141] libmachine: (newest-cni-468999) DBG | skip adding static IP to network mk-newest-cni-468999 - found existing host DHCP lease matching {name: "newest-cni-468999", mac: "52:54:00:5f:c0:60", ip: "192.168.61.68"}
	I1024 19:55:38.376323   63363 main.go:141] libmachine: (newest-cni-468999) Reserved static IP address: 192.168.61.68
	I1024 19:55:38.376345   63363 main.go:141] libmachine: (newest-cni-468999) Waiting for SSH to be available...
	I1024 19:55:38.376362   63363 main.go:141] libmachine: (newest-cni-468999) DBG | Getting to WaitForSSH function...
	I1024 19:55:38.378657   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:38.379128   63363 main.go:141] libmachine: (newest-cni-468999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c0:60", ip: ""} in network mk-newest-cni-468999: {Iface:virbr3 ExpiryTime:2023-10-24 20:55:29 +0000 UTC Type:0 Mac:52:54:00:5f:c0:60 Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:newest-cni-468999 Clientid:01:52:54:00:5f:c0:60}
	I1024 19:55:38.379172   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined IP address 192.168.61.68 and MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:38.379521   63363 main.go:141] libmachine: (newest-cni-468999) DBG | Using SSH client type: external
	I1024 19:55:38.379551   63363 main.go:141] libmachine: (newest-cni-468999) DBG | Using SSH private key: /home/jenkins/minikube-integration/17485-9104/.minikube/machines/newest-cni-468999/id_rsa (-rw-------)
	I1024 19:55:38.379590   63363 main.go:141] libmachine: (newest-cni-468999) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.68 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17485-9104/.minikube/machines/newest-cni-468999/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1024 19:55:38.379610   63363 main.go:141] libmachine: (newest-cni-468999) DBG | About to run SSH command:
	I1024 19:55:38.379651   63363 main.go:141] libmachine: (newest-cni-468999) DBG | exit 0
	I1024 19:55:38.477860   63363 main.go:141] libmachine: (newest-cni-468999) DBG | SSH cmd err, output: <nil>: 
	I1024 19:55:38.478236   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetConfigRaw
	I1024 19:55:38.478818   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetIP
	I1024 19:55:38.481649   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:38.482038   63363 main.go:141] libmachine: (newest-cni-468999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c0:60", ip: ""} in network mk-newest-cni-468999: {Iface:virbr3 ExpiryTime:2023-10-24 20:55:29 +0000 UTC Type:0 Mac:52:54:00:5f:c0:60 Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:newest-cni-468999 Clientid:01:52:54:00:5f:c0:60}
	I1024 19:55:38.482099   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined IP address 192.168.61.68 and MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:38.482267   63363 profile.go:148] Saving config to /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/newest-cni-468999/config.json ...
	I1024 19:55:38.482464   63363 machine.go:88] provisioning docker machine ...
	I1024 19:55:38.482485   63363 main.go:141] libmachine: (newest-cni-468999) Calling .DriverName
	I1024 19:55:38.482704   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetMachineName
	I1024 19:55:38.482884   63363 buildroot.go:166] provisioning hostname "newest-cni-468999"
	I1024 19:55:38.482907   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetMachineName
	I1024 19:55:38.483067   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHHostname
	I1024 19:55:38.485201   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:38.485585   63363 main.go:141] libmachine: (newest-cni-468999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c0:60", ip: ""} in network mk-newest-cni-468999: {Iface:virbr3 ExpiryTime:2023-10-24 20:55:29 +0000 UTC Type:0 Mac:52:54:00:5f:c0:60 Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:newest-cni-468999 Clientid:01:52:54:00:5f:c0:60}
	I1024 19:55:38.485616   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined IP address 192.168.61.68 and MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:38.485751   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHPort
	I1024 19:55:38.485934   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHKeyPath
	I1024 19:55:38.486112   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHKeyPath
	I1024 19:55:38.486285   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHUsername
	I1024 19:55:38.486491   63363 main.go:141] libmachine: Using SSH client type: native
	I1024 19:55:38.487004   63363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.68 22 <nil> <nil>}
	I1024 19:55:38.487026   63363 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-468999 && echo "newest-cni-468999" | sudo tee /etc/hostname
	I1024 19:55:38.631557   63363 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-468999
	
	I1024 19:55:38.631590   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHHostname
	I1024 19:55:38.634548   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:38.634956   63363 main.go:141] libmachine: (newest-cni-468999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c0:60", ip: ""} in network mk-newest-cni-468999: {Iface:virbr3 ExpiryTime:2023-10-24 20:55:29 +0000 UTC Type:0 Mac:52:54:00:5f:c0:60 Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:newest-cni-468999 Clientid:01:52:54:00:5f:c0:60}
	I1024 19:55:38.634991   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined IP address 192.168.61.68 and MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:38.635125   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHPort
	I1024 19:55:38.635285   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHKeyPath
	I1024 19:55:38.635465   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHKeyPath
	I1024 19:55:38.635591   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHUsername
	I1024 19:55:38.635771   63363 main.go:141] libmachine: Using SSH client type: native
	I1024 19:55:38.636137   63363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.68 22 <nil> <nil>}
	I1024 19:55:38.636157   63363 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-468999' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-468999/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-468999' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1024 19:55:38.775866   63363 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1024 19:55:38.775903   63363 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17485-9104/.minikube CaCertPath:/home/jenkins/minikube-integration/17485-9104/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17485-9104/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17485-9104/.minikube}
	I1024 19:55:38.775927   63363 buildroot.go:174] setting up certificates
	I1024 19:55:38.775938   63363 provision.go:83] configureAuth start
	I1024 19:55:38.775950   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetMachineName
	I1024 19:55:38.776241   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetIP
	I1024 19:55:38.778847   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:38.779281   63363 main.go:141] libmachine: (newest-cni-468999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c0:60", ip: ""} in network mk-newest-cni-468999: {Iface:virbr3 ExpiryTime:2023-10-24 20:55:29 +0000 UTC Type:0 Mac:52:54:00:5f:c0:60 Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:newest-cni-468999 Clientid:01:52:54:00:5f:c0:60}
	I1024 19:55:38.779312   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined IP address 192.168.61.68 and MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:38.779484   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHHostname
	I1024 19:55:38.781516   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:38.781785   63363 main.go:141] libmachine: (newest-cni-468999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c0:60", ip: ""} in network mk-newest-cni-468999: {Iface:virbr3 ExpiryTime:2023-10-24 20:55:29 +0000 UTC Type:0 Mac:52:54:00:5f:c0:60 Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:newest-cni-468999 Clientid:01:52:54:00:5f:c0:60}
	I1024 19:55:38.781822   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined IP address 192.168.61.68 and MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:38.781942   63363 provision.go:138] copyHostCerts
	I1024 19:55:38.782005   63363 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9104/.minikube/ca.pem, removing ...
	I1024 19:55:38.782020   63363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9104/.minikube/ca.pem
	I1024 19:55:38.782118   63363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9104/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17485-9104/.minikube/ca.pem (1082 bytes)
	I1024 19:55:38.782223   63363 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9104/.minikube/cert.pem, removing ...
	I1024 19:55:38.782233   63363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9104/.minikube/cert.pem
	I1024 19:55:38.782259   63363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9104/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17485-9104/.minikube/cert.pem (1123 bytes)
	I1024 19:55:38.782313   63363 exec_runner.go:144] found /home/jenkins/minikube-integration/17485-9104/.minikube/key.pem, removing ...
	I1024 19:55:38.782320   63363 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17485-9104/.minikube/key.pem
	I1024 19:55:38.782339   63363 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17485-9104/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17485-9104/.minikube/key.pem (1675 bytes)
	I1024 19:55:38.782382   63363 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17485-9104/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17485-9104/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17485-9104/.minikube/certs/ca-key.pem org=jenkins.newest-cni-468999 san=[192.168.61.68 192.168.61.68 localhost 127.0.0.1 minikube newest-cni-468999]
	I1024 19:55:38.926248   63363 provision.go:172] copyRemoteCerts
	I1024 19:55:38.926303   63363 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1024 19:55:38.926326   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHHostname
	I1024 19:55:38.929164   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:38.929455   63363 main.go:141] libmachine: (newest-cni-468999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c0:60", ip: ""} in network mk-newest-cni-468999: {Iface:virbr3 ExpiryTime:2023-10-24 20:55:29 +0000 UTC Type:0 Mac:52:54:00:5f:c0:60 Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:newest-cni-468999 Clientid:01:52:54:00:5f:c0:60}
	I1024 19:55:38.929483   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined IP address 192.168.61.68 and MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:38.929632   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHPort
	I1024 19:55:38.929851   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHKeyPath
	I1024 19:55:38.929995   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHUsername
	I1024 19:55:38.930152   63363 sshutil.go:53] new ssh client: &{IP:192.168.61.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/newest-cni-468999/id_rsa Username:docker}
	I1024 19:55:39.024032   63363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9104/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1024 19:55:39.047513   63363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9104/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1024 19:55:39.070826   63363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9104/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1024 19:55:39.095926   63363 provision.go:86] duration metric: configureAuth took 319.970128ms
	I1024 19:55:39.095950   63363 buildroot.go:189] setting minikube options for container-runtime
	I1024 19:55:39.096149   63363 config.go:182] Loaded profile config "newest-cni-468999": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1024 19:55:39.096184   63363 main.go:141] libmachine: (newest-cni-468999) Calling .DriverName
	I1024 19:55:39.096444   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHHostname
	I1024 19:55:39.099314   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:39.099622   63363 main.go:141] libmachine: (newest-cni-468999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c0:60", ip: ""} in network mk-newest-cni-468999: {Iface:virbr3 ExpiryTime:2023-10-24 20:55:29 +0000 UTC Type:0 Mac:52:54:00:5f:c0:60 Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:newest-cni-468999 Clientid:01:52:54:00:5f:c0:60}
	I1024 19:55:39.099657   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined IP address 192.168.61.68 and MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:39.099799   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHPort
	I1024 19:55:39.099994   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHKeyPath
	I1024 19:55:39.100166   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHKeyPath
	I1024 19:55:39.100293   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHUsername
	I1024 19:55:39.100473   63363 main.go:141] libmachine: Using SSH client type: native
	I1024 19:55:39.100815   63363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.68 22 <nil> <nil>}
	I1024 19:55:39.100835   63363 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1024 19:55:39.235951   63363 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1024 19:55:39.235980   63363 buildroot.go:70] root file system type: tmpfs
	I1024 19:55:39.236111   63363 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1024 19:55:39.236140   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHHostname
	I1024 19:55:39.238914   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:39.239340   63363 main.go:141] libmachine: (newest-cni-468999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c0:60", ip: ""} in network mk-newest-cni-468999: {Iface:virbr3 ExpiryTime:2023-10-24 20:55:29 +0000 UTC Type:0 Mac:52:54:00:5f:c0:60 Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:newest-cni-468999 Clientid:01:52:54:00:5f:c0:60}
	I1024 19:55:39.239372   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined IP address 192.168.61.68 and MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:39.239594   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHPort
	I1024 19:55:39.239797   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHKeyPath
	I1024 19:55:39.239971   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHKeyPath
	I1024 19:55:39.240122   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHUsername
	I1024 19:55:39.240319   63363 main.go:141] libmachine: Using SSH client type: native
	I1024 19:55:39.240680   63363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.68 22 <nil> <nil>}
	I1024 19:55:39.240757   63363 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1024 19:55:39.392078   63363 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1024 19:55:39.392138   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHHostname
	I1024 19:55:39.394866   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:39.395241   63363 main.go:141] libmachine: (newest-cni-468999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c0:60", ip: ""} in network mk-newest-cni-468999: {Iface:virbr3 ExpiryTime:2023-10-24 20:55:29 +0000 UTC Type:0 Mac:52:54:00:5f:c0:60 Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:newest-cni-468999 Clientid:01:52:54:00:5f:c0:60}
	I1024 19:55:39.395277   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined IP address 192.168.61.68 and MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:39.395426   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHPort
	I1024 19:55:39.395599   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHKeyPath
	I1024 19:55:39.395764   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHKeyPath
	I1024 19:55:39.395909   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHUsername
	I1024 19:55:39.396094   63363 main.go:141] libmachine: Using SSH client type: native
	I1024 19:55:39.396403   63363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.68 22 <nil> <nil>}
	I1024 19:55:39.396419   63363 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1024 19:55:40.372188   63363 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1024 19:55:40.372222   63363 machine.go:91] provisioned docker machine in 1.889742008s
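	
	The `diff -u ... || { mv ...; systemctl ... }` run at 19:55:39 above is a write-if-changed idiom: `diff` exits non-zero both when the two files differ and when the installed unit does not exist yet (as the "can't stat" output shows), and only then is the new unit swapped in and docker reloaded. A minimal standalone sketch of the same pattern, with UNIT as a stand-in variable:
	
	    UNIT=/lib/systemd/system/docker.service
	    # diff exits non-zero if the files differ or $UNIT is missing,
	    # so the braced block runs only when an update is actually needed
	    sudo diff -u "$UNIT" "$UNIT.new" || {
	        sudo mv "$UNIT.new" "$UNIT"
	        sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	    }
	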
	I1024 19:55:40.372235   63363 start.go:300] post-start starting for "newest-cni-468999" (driver="kvm2")
	I1024 19:55:40.372247   63363 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1024 19:55:40.372267   63363 main.go:141] libmachine: (newest-cni-468999) Calling .DriverName
	I1024 19:55:40.372604   63363 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1024 19:55:40.372643   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHHostname
	I1024 19:55:40.375202   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:40.375602   63363 main.go:141] libmachine: (newest-cni-468999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c0:60", ip: ""} in network mk-newest-cni-468999: {Iface:virbr3 ExpiryTime:2023-10-24 20:55:29 +0000 UTC Type:0 Mac:52:54:00:5f:c0:60 Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:newest-cni-468999 Clientid:01:52:54:00:5f:c0:60}
	I1024 19:55:40.375634   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined IP address 192.168.61.68 and MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:40.375784   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHPort
	I1024 19:55:40.375979   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHKeyPath
	I1024 19:55:40.376120   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHUsername
	I1024 19:55:40.376271   63363 sshutil.go:53] new ssh client: &{IP:192.168.61.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/newest-cni-468999/id_rsa Username:docker}
	I1024 19:55:40.476703   63363 ssh_runner.go:195] Run: cat /etc/os-release
	I1024 19:55:40.481607   63363 info.go:137] Remote host: Buildroot 2021.02.12
	I1024 19:55:40.481632   63363 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9104/.minikube/addons for local assets ...
	I1024 19:55:40.481691   63363 filesync.go:126] Scanning /home/jenkins/minikube-integration/17485-9104/.minikube/files for local assets ...
	I1024 19:55:40.481760   63363 filesync.go:149] local asset: /home/jenkins/minikube-integration/17485-9104/.minikube/files/etc/ssl/certs/164432.pem -> 164432.pem in /etc/ssl/certs
	I1024 19:55:40.481838   63363 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1024 19:55:40.491008   63363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9104/.minikube/files/etc/ssl/certs/164432.pem --> /etc/ssl/certs/164432.pem (1708 bytes)
	I1024 19:55:40.514840   63363 start.go:303] post-start completed in 142.591149ms
	I1024 19:55:40.514861   63363 fix.go:56] fixHost completed within 24.120199249s
	I1024 19:55:40.514886   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHHostname
	I1024 19:55:40.517611   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:40.517950   63363 main.go:141] libmachine: (newest-cni-468999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c0:60", ip: ""} in network mk-newest-cni-468999: {Iface:virbr3 ExpiryTime:2023-10-24 20:55:29 +0000 UTC Type:0 Mac:52:54:00:5f:c0:60 Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:newest-cni-468999 Clientid:01:52:54:00:5f:c0:60}
	I1024 19:55:40.517980   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined IP address 192.168.61.68 and MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:40.518239   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHPort
	I1024 19:55:40.518448   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHKeyPath
	I1024 19:55:40.518596   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHKeyPath
	I1024 19:55:40.518746   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHUsername
	I1024 19:55:40.518937   63363 main.go:141] libmachine: Using SSH client type: native
	I1024 19:55:40.519286   63363 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.68 22 <nil> <nil>}
	I1024 19:55:40.519298   63363 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1024 19:55:40.651218   63363 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698177340.595328085
	
	I1024 19:55:40.651253   63363 fix.go:206] guest clock: 1698177340.595328085
	I1024 19:55:40.651263   63363 fix.go:219] Guest: 2023-10-24 19:55:40.595328085 +0000 UTC Remote: 2023-10-24 19:55:40.514867186 +0000 UTC m=+24.286623844 (delta=80.460899ms)
	I1024 19:55:40.651320   63363 fix.go:190] guest clock delta is within tolerance: 80.460899ms
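	
	The guest clock check above runs `date +%s.%N` over SSH and compares the result with the host wall clock captured around the same moment; the start proceeds because the ~80ms delta is inside the sync tolerance. A rough hand-run equivalent (the ssh user and host are placeholders taken from this run):
	
	    HOST_TS=$(date +%s.%N)                             # host wall clock
	    GUEST_TS=$(ssh docker@192.168.61.68 date +%s.%N)   # guest wall clock over SSH
	    # print the guest-minus-host skew in seconds (~0.08s in this run)
	    echo "$GUEST_TS $HOST_TS" | awk '{printf "delta=%.6fs\n", $1 - $2}'
	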
	I1024 19:55:40.651328   63363 start.go:83] releasing machines lock for "newest-cni-468999", held for 24.256675509s
	I1024 19:55:40.651357   63363 main.go:141] libmachine: (newest-cni-468999) Calling .DriverName
	I1024 19:55:40.651650   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetIP
	I1024 19:55:40.654486   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:40.654872   63363 main.go:141] libmachine: (newest-cni-468999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c0:60", ip: ""} in network mk-newest-cni-468999: {Iface:virbr3 ExpiryTime:2023-10-24 20:55:29 +0000 UTC Type:0 Mac:52:54:00:5f:c0:60 Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:newest-cni-468999 Clientid:01:52:54:00:5f:c0:60}
	I1024 19:55:40.654905   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined IP address 192.168.61.68 and MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:40.655018   63363 main.go:141] libmachine: (newest-cni-468999) Calling .DriverName
	I1024 19:55:40.655539   63363 main.go:141] libmachine: (newest-cni-468999) Calling .DriverName
	I1024 19:55:40.655721   63363 main.go:141] libmachine: (newest-cni-468999) Calling .DriverName
	I1024 19:55:40.655836   63363 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1024 19:55:40.655877   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHHostname
	I1024 19:55:40.655973   63363 ssh_runner.go:195] Run: cat /version.json
	I1024 19:55:40.656002   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHHostname
	I1024 19:55:40.658733   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:40.658761   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:40.659155   63363 main.go:141] libmachine: (newest-cni-468999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c0:60", ip: ""} in network mk-newest-cni-468999: {Iface:virbr3 ExpiryTime:2023-10-24 20:55:29 +0000 UTC Type:0 Mac:52:54:00:5f:c0:60 Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:newest-cni-468999 Clientid:01:52:54:00:5f:c0:60}
	I1024 19:55:40.659183   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined IP address 192.168.61.68 and MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:40.659267   63363 main.go:141] libmachine: (newest-cni-468999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c0:60", ip: ""} in network mk-newest-cni-468999: {Iface:virbr3 ExpiryTime:2023-10-24 20:55:29 +0000 UTC Type:0 Mac:52:54:00:5f:c0:60 Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:newest-cni-468999 Clientid:01:52:54:00:5f:c0:60}
	I1024 19:55:40.659295   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined IP address 192.168.61.68 and MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:40.659321   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHPort
	I1024 19:55:40.659507   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHKeyPath
	I1024 19:55:40.659510   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHPort
	I1024 19:55:40.659690   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHKeyPath
	I1024 19:55:40.659691   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHUsername
	I1024 19:55:40.659856   63363 sshutil.go:53] new ssh client: &{IP:192.168.61.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/newest-cni-468999/id_rsa Username:docker}
	I1024 19:55:40.659872   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHUsername
	I1024 19:55:40.659999   63363 sshutil.go:53] new ssh client: &{IP:192.168.61.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/newest-cni-468999/id_rsa Username:docker}
	I1024 19:55:40.776689   63363 ssh_runner.go:195] Run: systemctl --version
	I1024 19:55:40.782886   63363 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1024 19:55:40.788642   63363 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1024 19:55:40.788699   63363 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1024 19:55:40.807726   63363 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1024 19:55:40.807748   63363 start.go:472] detecting cgroup driver to use...
	I1024 19:55:40.807885   63363 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:55:40.826794   63363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1024 19:55:40.838926   63363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1024 19:55:40.850935   63363 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1024 19:55:40.851006   63363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1024 19:55:40.863015   63363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1024 19:55:40.875079   63363 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1024 19:55:40.888197   63363 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1024 19:55:40.898782   63363 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1024 19:55:40.913279   63363 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1024 19:55:40.924163   63363 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1024 19:55:40.933003   63363 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1024 19:55:40.942586   63363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:55:41.045231   63363 ssh_runner.go:195] Run: sudo systemctl restart containerd
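	
	The sed calls between 19:55:40.826 and 19:55:40.913 above rewrite /etc/containerd/config.toml in place so containerd uses the cgroupfs driver and the runc v2 shim before the restart. Consolidated into one block (same file and values as in the log):
	
	    CFG=/etc/containerd/config.toml
	    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$CFG"
	    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"   # cgroupfs, not systemd
	    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$CFG"
	    sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' "$CFG"
	    sudo systemctl daemon-reload && sudo systemctl restart containerd
	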
	I1024 19:55:41.064376   63363 start.go:472] detecting cgroup driver to use...
	I1024 19:55:41.064461   63363 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1024 19:55:41.078605   63363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:55:41.097999   63363 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1024 19:55:41.123028   63363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1024 19:55:41.136194   63363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1024 19:55:41.149577   63363 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1024 19:55:41.181330   63363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1024 19:55:41.194638   63363 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1024 19:55:41.213991   63363 ssh_runner.go:195] Run: which cri-dockerd
	I1024 19:55:41.217824   63363 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1024 19:55:41.227287   63363 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1024 19:55:41.243927   63363 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1024 19:55:37.105342   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:39.106669   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:41.606512   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:38.383741   61295 pod_ready.go:102] pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:40.884984   61295 pod_ready.go:102] pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:41.358610   63363 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1024 19:55:41.471043   63363 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1024 19:55:41.471182   63363 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1024 19:55:41.489257   63363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:55:41.605266   63363 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1024 19:55:43.072897   63363 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.467603456s)
	I1024 19:55:43.072953   63363 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1024 19:55:43.181548   63363 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1024 19:55:43.306265   63363 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1024 19:55:43.426648   63363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:55:43.543731   63363 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1024 19:55:43.561684   63363 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1024 19:55:43.682050   63363 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1024 19:55:43.766964   63363 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1024 19:55:43.767026   63363 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1024 19:55:43.772862   63363 start.go:540] Will wait 60s for crictl version
	I1024 19:55:43.772911   63363 ssh_runner.go:195] Run: which crictl
	I1024 19:55:43.777212   63363 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1024 19:55:43.841688   63363 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1024 19:55:43.841764   63363 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1024 19:55:43.870577   63363 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1024 19:55:43.899552   63363 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1024 19:55:43.899600   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetIP
	I1024 19:55:43.902835   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:43.903180   63363 main.go:141] libmachine: (newest-cni-468999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c0:60", ip: ""} in network mk-newest-cni-468999: {Iface:virbr3 ExpiryTime:2023-10-24 20:55:29 +0000 UTC Type:0 Mac:52:54:00:5f:c0:60 Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:newest-cni-468999 Clientid:01:52:54:00:5f:c0:60}
	I1024 19:55:43.903213   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined IP address 192.168.61.68 and MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:55:43.903532   63363 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1024 19:55:43.907699   63363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
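	
	The /etc/hosts rewrite above is a replace-or-append idiom: drop any stale line ending in a tab plus the host name, append the fresh mapping, and copy the temp file over /etc/hosts in a single sudo step. Generalized (NAME and IP stand in for the values from the log):
	
	    NAME=host.minikube.internal
	    IP=192.168.61.1
	    # keep every line except an old "<ip>\t$NAME" entry, then append the new one
	    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts
	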
	I1024 19:55:43.923150   63363 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1024 19:55:39.088231   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:41.587899   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:43.924869   63363 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1024 19:55:43.924946   63363 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1024 19:55:43.945392   63363 docker.go:689] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1024 19:55:43.945419   63363 docker.go:619] Images already preloaded, skipping extraction
	I1024 19:55:43.945471   63363 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1024 19:55:43.965249   63363 docker.go:689] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1024 19:55:43.965274   63363 cache_images.go:84] Images are preloaded, skipping loading
	I1024 19:55:43.965339   63363 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1024 19:55:43.997808   63363 cni.go:84] Creating CNI manager for ""
	I1024 19:55:43.997829   63363 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1024 19:55:43.997844   63363 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1024 19:55:43.997874   63363 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.68 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-468999 NodeName:newest-cni-468999 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.68"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.68 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1024 19:55:43.998014   63363 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.68
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-468999"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.68
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.68"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1024 19:55:43.998107   63363 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-468999 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:newest-cni-468999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
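	
	The kubelet unit text above relies on the same systemd drop-in override pattern the docker unit documents earlier: the empty `ExecStart=` clears the inherited command, and the second `ExecStart=` supplies the replacement (systemd rejects multiple ExecStart= lines outside Type=oneshot). A sketch of such a drop-in, assembled from the unit text above (the exact file layout is an assumption; the path comes from the scp line below):
	
	    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	    [Unit]
	    Wants=docker.socket
	
	    [Service]
	    ExecStart=
	    ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-468999 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.68
	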
	I1024 19:55:43.998159   63363 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1024 19:55:44.007995   63363 binaries.go:44] Found k8s binaries, skipping transfer
	I1024 19:55:44.008077   63363 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1024 19:55:44.016148   63363 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (416 bytes)
	I1024 19:55:44.032032   63363 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1024 19:55:44.048611   63363 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1024 19:55:44.068079   63363 ssh_runner.go:195] Run: grep 192.168.61.68	control-plane.minikube.internal$ /etc/hosts
	I1024 19:55:44.071846   63363 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.68	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1024 19:55:44.085548   63363 certs.go:56] Setting up /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/newest-cni-468999 for IP: 192.168.61.68
	I1024 19:55:44.085582   63363 certs.go:190] acquiring lock for shared ca certs: {Name:mk82d7d72f62e2b33f42fdd3e948db186320730d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:55:44.085744   63363 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17485-9104/.minikube/ca.key
	I1024 19:55:44.085792   63363 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17485-9104/.minikube/proxy-client-ca.key
	I1024 19:55:44.085892   63363 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/newest-cni-468999/client.key
	I1024 19:55:44.085973   63363 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/newest-cni-468999/apiserver.key.d9731a8a
	I1024 19:55:44.086055   63363 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/newest-cni-468999/proxy-client.key
	I1024 19:55:44.086209   63363 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9104/.minikube/certs/home/jenkins/minikube-integration/17485-9104/.minikube/certs/16443.pem (1338 bytes)
	W1024 19:55:44.086261   63363 certs.go:433] ignoring /home/jenkins/minikube-integration/17485-9104/.minikube/certs/home/jenkins/minikube-integration/17485-9104/.minikube/certs/16443_empty.pem, impossibly tiny 0 bytes
	I1024 19:55:44.086279   63363 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9104/.minikube/certs/home/jenkins/minikube-integration/17485-9104/.minikube/certs/ca-key.pem (1679 bytes)
	I1024 19:55:44.086309   63363 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9104/.minikube/certs/home/jenkins/minikube-integration/17485-9104/.minikube/certs/ca.pem (1082 bytes)
	I1024 19:55:44.086341   63363 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9104/.minikube/certs/home/jenkins/minikube-integration/17485-9104/.minikube/certs/cert.pem (1123 bytes)
	I1024 19:55:44.086373   63363 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9104/.minikube/certs/home/jenkins/minikube-integration/17485-9104/.minikube/certs/key.pem (1675 bytes)
	I1024 19:55:44.086431   63363 certs.go:437] found cert: /home/jenkins/minikube-integration/17485-9104/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17485-9104/.minikube/files/etc/ssl/certs/164432.pem (1708 bytes)
	I1024 19:55:44.087272   63363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/newest-cni-468999/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1024 19:55:44.113883   63363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/newest-cni-468999/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1024 19:55:44.139591   63363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/newest-cni-468999/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1024 19:55:44.163989   63363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/newest-cni-468999/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1024 19:55:44.189153   63363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9104/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1024 19:55:44.214584   63363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9104/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1024 19:55:44.239242   63363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9104/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1024 19:55:44.265026   63363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9104/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1024 19:55:44.290933   63363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9104/.minikube/certs/16443.pem --> /usr/share/ca-certificates/16443.pem (1338 bytes)
	I1024 19:55:44.315698   63363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9104/.minikube/files/etc/ssl/certs/164432.pem --> /usr/share/ca-certificates/164432.pem (1708 bytes)
	I1024 19:55:44.341350   63363 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17485-9104/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1024 19:55:44.368004   63363 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1024 19:55:44.386431   63363 ssh_runner.go:195] Run: openssl version
	I1024 19:55:44.392542   63363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16443.pem && ln -fs /usr/share/ca-certificates/16443.pem /etc/ssl/certs/16443.pem"
	I1024 19:55:44.403002   63363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16443.pem
	I1024 19:55:44.407701   63363 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 24 19:06 /usr/share/ca-certificates/16443.pem
	I1024 19:55:44.407757   63363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16443.pem
	I1024 19:55:44.413832   63363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16443.pem /etc/ssl/certs/51391683.0"
	I1024 19:55:44.423501   63363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/164432.pem && ln -fs /usr/share/ca-certificates/164432.pem /etc/ssl/certs/164432.pem"
	I1024 19:55:44.433394   63363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/164432.pem
	I1024 19:55:44.438239   63363 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 24 19:06 /usr/share/ca-certificates/164432.pem
	I1024 19:55:44.438281   63363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/164432.pem
	I1024 19:55:44.444147   63363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/164432.pem /etc/ssl/certs/3ec20f2e.0"
	I1024 19:55:44.453961   63363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1024 19:55:44.463285   63363 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:55:44.468105   63363 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 24 19:01 /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:55:44.468162   63363 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1024 19:55:44.473679   63363 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1024 19:55:44.483437   63363 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1024 19:55:44.488416   63363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1024 19:55:44.494521   63363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1024 19:55:44.500339   63363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1024 19:55:44.506409   63363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1024 19:55:44.512320   63363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1024 19:55:44.518422   63363 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
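	
	The six openssl runs above check that every control-plane certificate remains valid for at least another day: `-checkend 86400` makes openssl exit non-zero if the certificate expires within that many seconds. A hand-run example against one of the same files:
	
	    # exit 0: still valid in 24h; exit 1: expires (or has expired) within 24h
	    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	        && echo "etcd server cert ok for another day" \
	        || echo "etcd server cert expires within 24h"
	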
	I1024 19:55:44.524311   63363 kubeadm.go:404] StartCluster: {Name:newest-cni-468999 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-468999 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.68 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:55:44.524447   63363 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1024 19:55:44.543577   63363 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1024 19:55:44.553111   63363 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1024 19:55:44.553132   63363 kubeadm.go:636] restartCluster start
	I1024 19:55:44.553193   63363 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1024 19:55:44.562130   63363 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:55:44.562807   63363 kubeconfig.go:135] verify returned: extract IP: "newest-cni-468999" does not appear in /home/jenkins/minikube-integration/17485-9104/kubeconfig
	I1024 19:55:44.563194   63363 kubeconfig.go:146] "newest-cni-468999" context is missing from /home/jenkins/minikube-integration/17485-9104/kubeconfig - will repair!
	I1024 19:55:44.563816   63363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9104/kubeconfig: {Name:mk3f1a292620d31d01e0540e90dfb98008d8ef4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:55:44.565380   63363 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1024 19:55:44.574671   63363 api_server.go:166] Checking apiserver status ...
	I1024 19:55:44.574734   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:55:44.585598   63363 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:55:44.585616   63363 api_server.go:166] Checking apiserver status ...
	I1024 19:55:44.585659   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:55:44.596427   63363 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:55:45.097119   63363 api_server.go:166] Checking apiserver status ...
	I1024 19:55:45.097221   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:55:45.109568   63363 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:55:45.597150   63363 api_server.go:166] Checking apiserver status ...
	I1024 19:55:45.597234   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:55:45.609604   63363 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:55:46.097236   63363 api_server.go:166] Checking apiserver status ...
	I1024 19:55:46.097322   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:55:46.111098   63363 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:55:43.606649   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:46.107112   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:43.382521   61295 pod_ready.go:102] pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:45.382624   61295 pod_ready.go:102] pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:47.383330   61295 pod_ready.go:102] pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:44.088338   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:46.587780   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:46.597468   63363 api_server.go:166] Checking apiserver status ...
	I1024 19:55:46.597559   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:55:46.609503   63363 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:55:47.097061   63363 api_server.go:166] Checking apiserver status ...
	I1024 19:55:47.097140   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:55:47.111512   63363 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:55:47.596654   63363 api_server.go:166] Checking apiserver status ...
	I1024 19:55:47.596802   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:55:47.609264   63363 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:55:48.096793   63363 api_server.go:166] Checking apiserver status ...
	I1024 19:55:48.096888   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:55:48.111906   63363 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:55:48.597432   63363 api_server.go:166] Checking apiserver status ...
	I1024 19:55:48.597506   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:55:48.609862   63363 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:55:49.097505   63363 api_server.go:166] Checking apiserver status ...
	I1024 19:55:49.097585   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:55:49.115670   63363 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:55:49.597064   63363 api_server.go:166] Checking apiserver status ...
	I1024 19:55:49.597134   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:55:49.609166   63363 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:55:50.096699   63363 api_server.go:166] Checking apiserver status ...
	I1024 19:55:50.096794   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:55:50.113431   63363 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:55:50.596992   63363 api_server.go:166] Checking apiserver status ...
	I1024 19:55:50.597072   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:55:50.610874   63363 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:55:51.097502   63363 api_server.go:166] Checking apiserver status ...
	I1024 19:55:51.097585   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:55:51.110994   63363 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:55:48.108225   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:50.607037   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:49.882691   61295 pod_ready.go:102] pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:51.882853   61295 pod_ready.go:102] pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:49.086396   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:51.088770   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:53.586438   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:51.597221   63363 api_server.go:166] Checking apiserver status ...
	I1024 19:55:51.597298   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:55:51.610925   63363 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:55:52.097201   63363 api_server.go:166] Checking apiserver status ...
	I1024 19:55:52.097281   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:55:52.112136   63363 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:55:52.596664   63363 api_server.go:166] Checking apiserver status ...
	I1024 19:55:52.596735   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:55:52.609885   63363 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:55:53.097463   63363 api_server.go:166] Checking apiserver status ...
	I1024 19:55:53.097547   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:55:53.112240   63363 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:55:53.596702   63363 api_server.go:166] Checking apiserver status ...
	I1024 19:55:53.596763   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:55:53.609065   63363 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:55:54.097445   63363 api_server.go:166] Checking apiserver status ...
	I1024 19:55:54.097513   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1024 19:55:54.109764   63363 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1024 19:55:54.575469   63363 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1024 19:55:54.575496   63363 kubeadm.go:1128] stopping kube-system containers ...
	I1024 19:55:54.575569   63363 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1024 19:55:54.599558   63363 docker.go:464] Stopping containers: [d212981fe29b 4da986de348e 8416fd5d3dcb 8ec7bc4583aa a4cfd4dbd82d 786908d0c748 ef6647983f15 d02e263197a6 6ca84f316c87 6de0a8d8129c 5206b5c4163c f88709eca03d aa3733bde55e eb71a7ffc074]
	I1024 19:55:54.599647   63363 ssh_runner.go:195] Run: docker stop d212981fe29b 4da986de348e 8416fd5d3dcb 8ec7bc4583aa a4cfd4dbd82d 786908d0c748 ef6647983f15 d02e263197a6 6ca84f316c87 6de0a8d8129c 5206b5c4163c f88709eca03d aa3733bde55e eb71a7ffc074
	I1024 19:55:54.624523   63363 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1024 19:55:54.642303   63363 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 19:55:54.651872   63363 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 19:55:54.651936   63363 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 19:55:54.661011   63363 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1024 19:55:54.661049   63363 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:55:54.790529   63363 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:55:55.476350   63363 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:55:55.682065   63363 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:55:55.827258   63363 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
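	
	Because the kubeconfig files were missing, the restart path re-runs individual `kubeadm init` phases against the rendered config instead of doing a full `kubeadm init`. The five runs above, folded into one loop (binary dir and config path as in the log; `$phase` is left unquoted on purpose so it splits into subcommand words):
	
	    BIN=/var/lib/minikube/binaries/v1.28.3
	    CFG=/var/tmp/minikube/kubeadm.yaml
	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	        sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"
	    done
	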
	I1024 19:55:55.925006   63363 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:55:55.925071   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:55:55.941384   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:55:53.108096   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:55.605350   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:53.884149   61295 pod_ready.go:102] pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:55.887130   61295 pod_ready.go:102] pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:55.588475   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:57.588965   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:56.468426   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:55:56.968663   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:55:57.468515   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:55:57.968687   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:55:58.468315   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:55:58.494139   63363 api_server.go:72] duration metric: took 2.56913575s to wait for apiserver process to appear ...
	I1024 19:55:58.494160   63363 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:55:58.494185   63363 api_server.go:253] Checking apiserver healthz at https://192.168.61.68:8443/healthz ...
	I1024 19:56:01.139973   63363 api_server.go:279] https://192.168.61.68:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 19:56:01.140009   63363 api_server.go:103] status: https://192.168.61.68:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 19:56:01.140019   63363 api_server.go:253] Checking apiserver healthz at https://192.168.61.68:8443/healthz ...
	I1024 19:56:01.261428   63363 api_server.go:279] https://192.168.61.68:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1024 19:56:01.261455   63363 api_server.go:103] status: https://192.168.61.68:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1024 19:55:57.606445   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:59.606577   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:55:58.384534   61295 pod_ready.go:102] pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:56:00.884112   61295 pod_ready.go:102] pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:56:01.762083   63363 api_server.go:253] Checking apiserver healthz at https://192.168.61.68:8443/healthz ...
	I1024 19:56:01.767410   63363 api_server.go:279] https://192.168.61.68:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 19:56:01.767435   63363 api_server.go:103] status: https://192.168.61.68:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1024 19:56:02.261641   63363 api_server.go:253] Checking apiserver healthz at https://192.168.61.68:8443/healthz ...
	I1024 19:56:02.267758   63363 api_server.go:279] https://192.168.61.68:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1024 19:56:02.267784   63363 api_server.go:103] status: https://192.168.61.68:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
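
Note: in the verbose 500 bodies, each registered health check is listed as "[+]name ok" (passing) or "[-]name failed" (failing; the reason is withheld from unauthenticated callers). Across the probes above, scheduling/bootstrap-system-priority-classes clears first and rbac/bootstrap-roles last, after which /healthz returns 200 below. A small sketch that parses this line format into passing and failing check names (the format is inferred from the log output itself, not from a documented API):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // parseHealthz splits a verbose /healthz body into passing and failing
    // check names, following the "[+]name ok" / "[-]name failed: ..." lines
    // visible in the log above.
    func parseHealthz(body string) (passed, failed []string) {
        sc := bufio.NewScanner(strings.NewReader(body))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if !strings.HasPrefix(line, "[+]") && !strings.HasPrefix(line, "[-]") {
                continue
            }
            fields := strings.Fields(line[3:])
            if len(fields) == 0 {
                continue
            }
            if strings.HasPrefix(line, "[+]") {
                passed = append(passed, fields[0])
            } else {
                failed = append(failed, fields[0])
            }
        }
        return passed, failed
    }

    func main() {
        body := "[+]ping ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\n"
        passed, failed := parseHealthz(body)
        fmt.Printf("passed: %v\nfailed: %v\n", passed, failed)
    }
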
	I1024 19:56:02.762204   63363 api_server.go:253] Checking apiserver healthz at https://192.168.61.68:8443/healthz ...
	I1024 19:56:02.767937   63363 api_server.go:279] https://192.168.61.68:8443/healthz returned 200:
	ok
	I1024 19:56:02.779084   63363 api_server.go:141] control plane version: v1.28.3
	I1024 19:56:02.779115   63363 api_server.go:131] duration metric: took 4.2849478s to wait for apiserver health ...
	I1024 19:56:02.779127   63363 cni.go:84] Creating CNI manager for ""
	I1024 19:56:02.779143   63363 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1024 19:56:02.781100   63363 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 19:56:02.782513   63363 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 19:56:02.796440   63363 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
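
Note: the 457 bytes copied to /etc/cni/net.d/1-k8s.conflist carry the bridge CNI configuration announced at cni.go:158 above. The log does not show the file contents; the snippet below is only an illustrative conflist in the general shape of a CNI bridge + portmap chain (the subnet and plugin options are assumptions, not the actual file):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
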
	I1024 19:56:02.817674   63363 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:56:02.828814   63363 system_pods.go:59] 8 kube-system pods found
	I1024 19:56:02.828854   63363 system_pods.go:61] "coredns-5dd5756b68-vjvrg" [ec695458-e873-4e2d-b4a3-6d7666bbc919] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 19:56:02.828862   63363 system_pods.go:61] "etcd-newest-cni-468999" [6280a8c8-2011-4f08-8730-8899dbd41b81] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1024 19:56:02.828869   63363 system_pods.go:61] "kube-apiserver-newest-cni-468999" [74396e87-eb06-4eb8-8042-4c52500e1a1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1024 19:56:02.828888   63363 system_pods.go:61] "kube-controller-manager-newest-cni-468999" [b0729b57-98c7-46f2-bdd6-973b21db3527] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1024 19:56:02.828898   63363 system_pods.go:61] "kube-proxy-chknl" [e41fbf9c-7c48-4969-8545-7a4d3594fbda] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1024 19:56:02.828904   63363 system_pods.go:61] "kube-scheduler-newest-cni-468999" [a0bfe1ae-841f-4dec-89a9-8298e89cd514] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1024 19:56:02.828911   63363 system_pods.go:61] "metrics-server-57f55c9bc5-fs4gp" [563af571-bc06-40cd-a808-4f8760fcb6ef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:56:02.828919   63363 system_pods.go:61] "storage-provisioner" [174f774c-b352-4e2e-aa3f-2e35197696e8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 19:56:02.828926   63363 system_pods.go:74] duration metric: took 11.233713ms to wait for pod list to return data ...
	I1024 19:56:02.828936   63363 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:56:02.835494   63363 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 19:56:02.835524   63363 node_conditions.go:123] node cpu capacity is 2
	I1024 19:56:02.835535   63363 node_conditions.go:105] duration metric: took 6.594573ms to run NodePressure ...
	I1024 19:56:02.835555   63363 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1024 19:56:03.222812   63363 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 19:56:03.247637   63363 ops.go:34] apiserver oom_adj: -16
	I1024 19:56:03.247662   63363 kubeadm.go:640] restartCluster took 18.694523293s
	I1024 19:56:03.247672   63363 kubeadm.go:406] StartCluster complete in 18.723365341s
	I1024 19:56:03.247692   63363 settings.go:142] acquiring lock: {Name:mk36c78ae5c888974883b83cd211b07900a5571c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:56:03.247775   63363 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9104/kubeconfig
	I1024 19:56:03.249309   63363 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9104/kubeconfig: {Name:mk3f1a292620d31d01e0540e90dfb98008d8ef4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:56:03.249791   63363 config.go:182] Loaded profile config "newest-cni-468999": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1024 19:56:03.249788   63363 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 19:56:03.249868   63363 cache.go:107] acquiring lock: {Name:mke60744a4234b419c1d64d246b94fc561986c72 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:56:03.249905   63363 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-468999"
	I1024 19:56:03.249950   63363 cache.go:115] /home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I1024 19:56:03.249971   63363 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 113.671µs
	I1024 19:56:03.249987   63363 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I1024 19:56:03.249998   63363 cache.go:87] Successfully saved all images to host disk.
	I1024 19:56:03.250016   63363 addons.go:69] Setting default-storageclass=true in profile "newest-cni-468999"
	I1024 19:56:03.250086   63363 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-468999"
	I1024 19:56:03.249953   63363 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-468999"
	W1024 19:56:03.250143   63363 addons.go:240] addon storage-provisioner should already be in state true
	I1024 19:56:03.250230   63363 config.go:182] Loaded profile config "newest-cni-468999": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1024 19:56:03.250240   63363 host.go:66] Checking if "newest-cni-468999" exists ...
	I1024 19:56:03.250602   63363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:56:03.250647   63363 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 19:56:03.250655   63363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:56:03.250668   63363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:56:03.250690   63363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:56:03.250704   63363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:56:03.250715   63363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:56:03.250851   63363 addons.go:69] Setting dashboard=true in profile "newest-cni-468999"
	I1024 19:56:03.250858   63363 addons.go:69] Setting metrics-server=true in profile "newest-cni-468999"
	I1024 19:56:03.250878   63363 addons.go:231] Setting addon dashboard=true in "newest-cni-468999"
	W1024 19:56:03.250891   63363 addons.go:240] addon dashboard should already be in state true
	I1024 19:56:03.250902   63363 addons.go:231] Setting addon metrics-server=true in "newest-cni-468999"
	W1024 19:56:03.250914   63363 addons.go:240] addon metrics-server should already be in state true
	I1024 19:56:03.250974   63363 host.go:66] Checking if "newest-cni-468999" exists ...
	I1024 19:56:03.251001   63363 host.go:66] Checking if "newest-cni-468999" exists ...
	I1024 19:56:03.251378   63363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:56:03.251402   63363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:56:03.251426   63363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:56:03.251440   63363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:56:03.262464   63363 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-468999" context rescaled to 1 replicas
	I1024 19:56:03.262507   63363 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.68 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1024 19:56:03.264681   63363 out.go:177] * Verifying Kubernetes components...
	I1024 19:56:03.266186   63363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:56:03.270647   63363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44961
	I1024 19:56:03.270849   63363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34375
	I1024 19:56:03.271188   63363 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:56:03.271241   63363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46061
	I1024 19:56:03.271522   63363 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:56:03.271682   63363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46523
	I1024 19:56:03.271793   63363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41045
	I1024 19:56:03.271982   63363 main.go:141] libmachine: Using API Version  1
	I1024 19:56:03.272024   63363 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:56:03.272189   63363 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:56:03.272241   63363 main.go:141] libmachine: Using API Version  1
	I1024 19:56:03.272258   63363 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:56:03.272628   63363 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:56:03.272679   63363 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:56:03.272690   63363 main.go:141] libmachine: Using API Version  1
	I1024 19:56:03.272700   63363 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:56:03.272706   63363 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:56:03.272766   63363 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:56:03.272908   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetState
	I1024 19:56:03.273169   63363 main.go:141] libmachine: Using API Version  1
	I1024 19:56:03.273194   63363 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:56:03.273215   63363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:56:03.273261   63363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:56:03.273293   63363 main.go:141] libmachine: Using API Version  1
	I1024 19:56:03.273316   63363 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:56:03.273553   63363 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:56:03.273719   63363 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:56:03.273732   63363 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:56:03.273780   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetState
	I1024 19:56:03.274374   63363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:56:03.274417   63363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:56:03.274519   63363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:56:03.274553   63363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:56:03.277068   63363 addons.go:231] Setting addon default-storageclass=true in "newest-cni-468999"
	W1024 19:56:03.277089   63363 addons.go:240] addon default-storageclass should already be in state true
	I1024 19:56:03.277114   63363 host.go:66] Checking if "newest-cni-468999" exists ...
	I1024 19:56:03.277506   63363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:56:03.277547   63363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:56:03.278548   63363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:56:03.278596   63363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:56:03.294186   63363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38683
	I1024 19:56:03.294245   63363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34661
	I1024 19:56:03.294345   63363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34517
	I1024 19:56:03.294786   63363 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:56:03.294838   63363 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:56:03.295264   63363 main.go:141] libmachine: Using API Version  1
	I1024 19:56:03.295284   63363 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:56:03.295369   63363 main.go:141] libmachine: Using API Version  1
	I1024 19:56:03.295391   63363 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:56:03.295653   63363 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:56:03.295802   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetState
	I1024 19:56:03.295848   63363 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:56:03.295996   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetState
	I1024 19:56:03.296622   63363 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:56:03.297210   63363 main.go:141] libmachine: Using API Version  1
	I1024 19:56:03.297230   63363 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:56:03.297590   63363 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:56:03.297777   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetState
	I1024 19:56:03.297838   63363 main.go:141] libmachine: (newest-cni-468999) Calling .DriverName
	I1024 19:56:03.300276   63363 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:56:03.301864   63363 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:56:03.301884   63363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 19:56:03.301903   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHHostname
	I1024 19:56:03.300300   63363 main.go:141] libmachine: (newest-cni-468999) Calling .DriverName
	I1024 19:56:03.298504   63363 main.go:141] libmachine: (newest-cni-468999) Calling .DriverName
	I1024 19:56:03.301218   63363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37765
	I1024 19:56:03.303774   63363 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1024 19:56:03.302749   63363 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:56:03.304757   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:56:03.305313   63363 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1024 19:56:03.305347   63363 main.go:141] libmachine: (newest-cni-468999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c0:60", ip: ""} in network mk-newest-cni-468999: {Iface:virbr3 ExpiryTime:2023-10-24 20:55:29 +0000 UTC Type:0 Mac:52:54:00:5f:c0:60 Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:newest-cni-468999 Clientid:01:52:54:00:5f:c0:60}
	I1024 19:56:03.306918   63363 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1024 19:56:00.088935   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:56:02.589160   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:56:03.305291   63363 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 19:56:03.305512   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHPort
	I1024 19:56:03.305926   63363 main.go:141] libmachine: Using API Version  1
	I1024 19:56:03.306947   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined IP address 192.168.61.68 and MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:56:03.308493   63363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 19:56:03.308527   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHHostname
	I1024 19:56:03.308532   63363 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:56:03.308540   63363 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1024 19:56:03.308555   63363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1024 19:56:03.308577   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHHostname
	I1024 19:56:03.308674   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHKeyPath
	I1024 19:56:03.308925   63363 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:56:03.308971   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHUsername
	I1024 19:56:03.309530   63363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:56:03.309572   63363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:56:03.309685   63363 sshutil.go:53] new ssh client: &{IP:192.168.61.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/newest-cni-468999/id_rsa Username:docker}
	I1024 19:56:03.312888   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:56:03.313940   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:56:03.314693   63363 main.go:141] libmachine: (newest-cni-468999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c0:60", ip: ""} in network mk-newest-cni-468999: {Iface:virbr3 ExpiryTime:2023-10-24 20:55:29 +0000 UTC Type:0 Mac:52:54:00:5f:c0:60 Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:newest-cni-468999 Clientid:01:52:54:00:5f:c0:60}
	I1024 19:56:03.314726   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined IP address 192.168.61.68 and MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:56:03.314832   63363 main.go:141] libmachine: (newest-cni-468999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c0:60", ip: ""} in network mk-newest-cni-468999: {Iface:virbr3 ExpiryTime:2023-10-24 20:55:29 +0000 UTC Type:0 Mac:52:54:00:5f:c0:60 Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:newest-cni-468999 Clientid:01:52:54:00:5f:c0:60}
	I1024 19:56:03.314908   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined IP address 192.168.61.68 and MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:56:03.315162   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHPort
	I1024 19:56:03.315238   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHPort
	I1024 19:56:03.315317   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHKeyPath
	I1024 19:56:03.315526   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHKeyPath
	I1024 19:56:03.315538   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHUsername
	I1024 19:56:03.315657   63363 sshutil.go:53] new ssh client: &{IP:192.168.61.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/newest-cni-468999/id_rsa Username:docker}
	I1024 19:56:03.316205   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHUsername
	I1024 19:56:03.316389   63363 sshutil.go:53] new ssh client: &{IP:192.168.61.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/newest-cni-468999/id_rsa Username:docker}
	I1024 19:56:03.325556   63363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33073
	I1024 19:56:03.325762   63363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32785
	I1024 19:56:03.326372   63363 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:56:03.326455   63363 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:56:03.326937   63363 main.go:141] libmachine: Using API Version  1
	I1024 19:56:03.326959   63363 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:56:03.327170   63363 main.go:141] libmachine: Using API Version  1
	I1024 19:56:03.327189   63363 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:56:03.327337   63363 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:56:03.327620   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetState
	I1024 19:56:03.327877   63363 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:56:03.328054   63363 main.go:141] libmachine: (newest-cni-468999) Calling .DriverName
	I1024 19:56:03.328528   63363 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1024 19:56:03.328556   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHHostname
	I1024 19:56:03.329586   63363 main.go:141] libmachine: (newest-cni-468999) Calling .DriverName
	I1024 19:56:03.329850   63363 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 19:56:03.329860   63363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 19:56:03.329872   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHHostname
	I1024 19:56:03.332397   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:56:03.333029   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:56:03.333686   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHPort
	I1024 19:56:03.333764   63363 main.go:141] libmachine: (newest-cni-468999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c0:60", ip: ""} in network mk-newest-cni-468999: {Iface:virbr3 ExpiryTime:2023-10-24 20:55:29 +0000 UTC Type:0 Mac:52:54:00:5f:c0:60 Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:newest-cni-468999 Clientid:01:52:54:00:5f:c0:60}
	I1024 19:56:03.333783   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined IP address 192.168.61.68 and MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:56:03.333838   63363 main.go:141] libmachine: (newest-cni-468999) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:c0:60", ip: ""} in network mk-newest-cni-468999: {Iface:virbr3 ExpiryTime:2023-10-24 20:55:29 +0000 UTC Type:0 Mac:52:54:00:5f:c0:60 Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:newest-cni-468999 Clientid:01:52:54:00:5f:c0:60}
	I1024 19:56:03.333855   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHKeyPath
	I1024 19:56:03.333870   63363 main.go:141] libmachine: (newest-cni-468999) DBG | domain newest-cni-468999 has defined IP address 192.168.61.68 and MAC address 52:54:00:5f:c0:60 in network mk-newest-cni-468999
	I1024 19:56:03.334053   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHPort
	I1024 19:56:03.334055   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHUsername
	I1024 19:56:03.334241   63363 sshutil.go:53] new ssh client: &{IP:192.168.61.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/newest-cni-468999/id_rsa Username:docker}
	I1024 19:56:03.334798   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHKeyPath
	I1024 19:56:03.334954   63363 main.go:141] libmachine: (newest-cni-468999) Calling .GetSSHUsername
	I1024 19:56:03.335180   63363 sshutil.go:53] new ssh client: &{IP:192.168.61.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/newest-cni-468999/id_rsa Username:docker}
	I1024 19:56:03.579586   63363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:56:03.763810   63363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 19:56:03.834573   63363 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 19:56:03.834602   63363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1024 19:56:03.850480   63363 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1024 19:56:03.850510   63363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1024 19:56:03.952697   63363 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1024 19:56:03.952730   63363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1024 19:56:03.954569   63363 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 19:56:03.954592   63363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 19:56:04.073742   63363 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 19:56:04.073763   63363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 19:56:04.145243   63363 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1024 19:56:04.145271   63363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1024 19:56:04.212856   63363 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1024 19:56:04.212881   63363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1024 19:56:04.271879   63363 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.021199475s)
	I1024 19:56:04.271938   63363 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.005720239s)
	I1024 19:56:04.271974   63363 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1024 19:56:04.271989   63363 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:56:04.272036   63363 docker.go:689] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1024 19:56:04.272064   63363 cache_images.go:84] Images are preloaded, skipping loading
	I1024 19:56:04.272068   63363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:56:04.272077   63363 cache_images.go:262] succeeded pushing to: newest-cni-468999
	I1024 19:56:04.272083   63363 cache_images.go:263] failed pushing to: 
	I1024 19:56:04.272107   63363 main.go:141] libmachine: Making call to close driver server
	I1024 19:56:04.272121   63363 main.go:141] libmachine: (newest-cni-468999) Calling .Close
	I1024 19:56:04.272547   63363 main.go:141] libmachine: (newest-cni-468999) DBG | Closing plugin on server side
	I1024 19:56:04.272562   63363 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:56:04.272585   63363 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:56:04.272611   63363 main.go:141] libmachine: Making call to close driver server
	I1024 19:56:04.272622   63363 main.go:141] libmachine: (newest-cni-468999) Calling .Close
	I1024 19:56:04.272892   63363 main.go:141] libmachine: (newest-cni-468999) DBG | Closing plugin on server side
	I1024 19:56:04.272944   63363 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:56:04.272955   63363 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:56:04.273351   63363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 19:56:04.319411   63363 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1024 19:56:04.319434   63363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1024 19:56:04.413270   63363 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1024 19:56:04.413295   63363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1024 19:56:04.445572   63363 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1024 19:56:04.445599   63363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1024 19:56:04.465513   63363 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1024 19:56:04.465530   63363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1024 19:56:04.512338   63363 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1024 19:56:04.512364   63363 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1024 19:56:04.532361   63363 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1024 19:56:05.851787   63363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.272166782s)
	I1024 19:56:05.851846   63363 main.go:141] libmachine: Making call to close driver server
	I1024 19:56:05.851860   63363 main.go:141] libmachine: (newest-cni-468999) Calling .Close
	I1024 19:56:05.851879   63363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.088034262s)
	I1024 19:56:05.851914   63363 main.go:141] libmachine: Making call to close driver server
	I1024 19:56:05.851931   63363 main.go:141] libmachine: (newest-cni-468999) Calling .Close
	I1024 19:56:05.851965   63363 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.579872884s)
	I1024 19:56:05.851986   63363 api_server.go:72] duration metric: took 2.589451097s to wait for apiserver process to appear ...
	I1024 19:56:05.851992   63363 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:56:05.852007   63363 api_server.go:253] Checking apiserver healthz at https://192.168.61.68:8443/healthz ...
	I1024 19:56:05.852249   63363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.57886908s)
	I1024 19:56:05.852412   63363 main.go:141] libmachine: Making call to close driver server
	I1024 19:56:05.852444   63363 main.go:141] libmachine: (newest-cni-468999) Calling .Close
	I1024 19:56:05.852636   63363 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:56:05.852653   63363 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:56:05.852663   63363 main.go:141] libmachine: Making call to close driver server
	I1024 19:56:05.852671   63363 main.go:141] libmachine: (newest-cni-468999) Calling .Close
	I1024 19:56:05.853893   63363 main.go:141] libmachine: (newest-cni-468999) DBG | Closing plugin on server side
	I1024 19:56:05.853911   63363 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:56:05.853921   63363 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:56:05.853930   63363 main.go:141] libmachine: Making call to close driver server
	I1024 19:56:05.853939   63363 main.go:141] libmachine: (newest-cni-468999) Calling .Close
	I1024 19:56:05.854056   63363 main.go:141] libmachine: (newest-cni-468999) DBG | Closing plugin on server side
	I1024 19:56:05.854078   63363 main.go:141] libmachine: (newest-cni-468999) DBG | Closing plugin on server side
	I1024 19:56:05.854100   63363 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:56:05.854116   63363 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:56:05.854132   63363 main.go:141] libmachine: Making call to close driver server
	I1024 19:56:05.854143   63363 main.go:141] libmachine: (newest-cni-468999) Calling .Close
	I1024 19:56:05.854168   63363 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:56:05.854179   63363 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:56:05.854201   63363 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:56:05.854206   63363 main.go:141] libmachine: (newest-cni-468999) DBG | Closing plugin on server side
	I1024 19:56:05.854211   63363 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:56:05.854418   63363 main.go:141] libmachine: (newest-cni-468999) DBG | Closing plugin on server side
	I1024 19:56:05.854420   63363 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:56:05.854439   63363 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:56:05.854451   63363 addons.go:467] Verifying addon metrics-server=true in "newest-cni-468999"
	I1024 19:56:05.866799   63363 api_server.go:279] https://192.168.61.68:8443/healthz returned 200:
	ok
	I1024 19:56:05.867468   63363 main.go:141] libmachine: Making call to close driver server
	I1024 19:56:05.867493   63363 main.go:141] libmachine: (newest-cni-468999) Calling .Close
	I1024 19:56:05.867827   63363 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:56:05.867843   63363 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:56:05.867905   63363 main.go:141] libmachine: (newest-cni-468999) DBG | Closing plugin on server side
	I1024 19:56:05.868285   63363 api_server.go:141] control plane version: v1.28.3
	I1024 19:56:05.868306   63363 api_server.go:131] duration metric: took 16.306903ms to wait for apiserver health ...
	I1024 19:56:05.868316   63363 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:56:05.875764   63363 system_pods.go:59] 8 kube-system pods found
	I1024 19:56:05.875796   63363 system_pods.go:61] "coredns-5dd5756b68-vjvrg" [ec695458-e873-4e2d-b4a3-6d7666bbc919] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 19:56:05.875809   63363 system_pods.go:61] "etcd-newest-cni-468999" [6280a8c8-2011-4f08-8730-8899dbd41b81] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1024 19:56:05.875820   63363 system_pods.go:61] "kube-apiserver-newest-cni-468999" [74396e87-eb06-4eb8-8042-4c52500e1a1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1024 19:56:05.875833   63363 system_pods.go:61] "kube-controller-manager-newest-cni-468999" [b0729b57-98c7-46f2-bdd6-973b21db3527] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1024 19:56:05.875843   63363 system_pods.go:61] "kube-proxy-chknl" [e41fbf9c-7c48-4969-8545-7a4d3594fbda] Running
	I1024 19:56:05.875856   63363 system_pods.go:61] "kube-scheduler-newest-cni-468999" [a0bfe1ae-841f-4dec-89a9-8298e89cd514] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1024 19:56:05.875870   63363 system_pods.go:61] "metrics-server-57f55c9bc5-fs4gp" [563af571-bc06-40cd-a808-4f8760fcb6ef] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:56:05.875881   63363 system_pods.go:61] "storage-provisioner" [174f774c-b352-4e2e-aa3f-2e35197696e8] Running
	I1024 19:56:05.875892   63363 system_pods.go:74] duration metric: took 7.56859ms to wait for pod list to return data ...
	I1024 19:56:05.875904   63363 default_sa.go:34] waiting for default service account to be created ...
	I1024 19:56:05.878635   63363 default_sa.go:45] found service account: "default"
	I1024 19:56:05.878658   63363 default_sa.go:55] duration metric: took 2.744761ms for default service account to be created ...
	I1024 19:56:05.878669   63363 kubeadm.go:581] duration metric: took 2.616134155s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1024 19:56:05.878688   63363 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:56:05.883788   63363 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 19:56:05.883822   63363 node_conditions.go:123] node cpu capacity is 2
	I1024 19:56:05.883833   63363 node_conditions.go:105] duration metric: took 5.093634ms to run NodePressure ...
	I1024 19:56:05.883846   63363 start.go:228] waiting for startup goroutines ...
	I1024 19:56:06.372117   63363 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.839691606s)
	I1024 19:56:06.372178   63363 main.go:141] libmachine: Making call to close driver server
	I1024 19:56:06.372193   63363 main.go:141] libmachine: (newest-cni-468999) Calling .Close
	I1024 19:56:06.372641   63363 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:56:06.372661   63363 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:56:06.372671   63363 main.go:141] libmachine: Making call to close driver server
	I1024 19:56:06.372669   63363 main.go:141] libmachine: (newest-cni-468999) DBG | Closing plugin on server side
	I1024 19:56:06.372692   63363 main.go:141] libmachine: (newest-cni-468999) Calling .Close
	I1024 19:56:06.372929   63363 main.go:141] libmachine: (newest-cni-468999) DBG | Closing plugin on server side
	I1024 19:56:06.372952   63363 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:56:06.372964   63363 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:56:06.374649   63363 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-468999 addons enable metrics-server	
	
	
	I1024 19:56:06.376438   63363 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I1024 19:56:06.377876   63363 addons.go:502] enable addons completed in 3.128145697s: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
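
Note: with the dashboard addon enabled above, the dashboard for this profile can subsequently be opened through minikube's proxy (standard minikube usage, not shown in this log):

	minikube -p newest-cni-468999 dashboard
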
	I1024 19:56:06.377945   63363 start.go:233] waiting for cluster config update ...
	I1024 19:56:06.377971   63363 start.go:242] writing updated cluster config ...
	I1024 19:56:06.378324   63363 ssh_runner.go:195] Run: rm -f paused
	I1024 19:56:06.446525   63363 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 19:56:06.448405   63363 out.go:177] * Done! kubectl is now configured to use "newest-cni-468999" cluster and "default" namespace by default
	I1024 19:56:02.105513   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:56:03.384766   61295 pod_ready.go:102] pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:56:04.591701   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	[... identical pod_ready.go:102 checks from processes 61522, 61295, and 61871 repeat at ~2-2.5s intervals until 19:57:11; all three metrics-server pods report "Ready":"False" on every check ...]
	I1024 19:57:08.382883   61295 pod_ready.go:102] pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:10.075760   61295 pod_ready.go:81] duration metric: took 4m0.000259515s waiting for pod "metrics-server-57f55c9bc5-n9drb" in "kube-system" namespace to be "Ready" ...
	E1024 19:57:10.075790   61295 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1024 19:57:10.075808   61295 pod_ready.go:38] duration metric: took 4m13.043039435s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:57:10.075833   61295 kubeadm.go:640] restartCluster took 4m32.762100643s
	W1024 19:57:10.075888   61295 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
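The 4m0s wait that just timed out is minikube's internal client-go polling; expressed with kubectl it corresponds roughly to one such wait per label in the list above, e.g. (illustrative only, not a command minikube runs):

	kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s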
	I1024 19:57:10.075920   61295 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1024 19:57:10.590241   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:13.088083   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:14.105961   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:16.604943   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:15.587977   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:17.588131   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:19.343448   61295 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (9.267502933s)
	I1024 19:57:19.343535   61295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:57:19.358911   61295 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 19:57:19.370531   61295 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 19:57:19.382291   61295 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 19:57:19.382329   61295 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1024 19:57:19.437416   61295 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1024 19:57:19.437494   61295 kubeadm.go:322] [preflight] Running pre-flight checks
	I1024 19:57:19.616382   61295 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1024 19:57:19.616518   61295 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1024 19:57:19.616669   61295 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
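The preflight hint above refers to pre-pulling the control-plane images. Pinned to the version this log uses, that would be (optional; it only avoids the pull pause during init):

	kubeadm config images pull --kubernetes-version v1.28.3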
	I1024 19:57:19.993094   61295 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1024 19:57:19.995601   61295 out.go:204]   - Generating certificates and keys ...
	I1024 19:57:19.995694   61295 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1024 19:57:19.995835   61295 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1024 19:57:19.995929   61295 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1024 19:57:19.996011   61295 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1024 19:57:19.996127   61295 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1024 19:57:19.996209   61295 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1024 19:57:19.996299   61295 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1024 19:57:19.997279   61295 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1024 19:57:19.998145   61295 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1024 19:57:19.999568   61295 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1024 19:57:20.000512   61295 kubeadm.go:322] [certs] Using the existing "sa" key
	I1024 19:57:20.000607   61295 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1024 19:57:20.131456   61295 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1024 19:57:20.643120   61295 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1024 19:57:20.938491   61295 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1024 19:57:21.256325   61295 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1024 19:57:21.256616   61295 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1024 19:57:21.259196   61295 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1024 19:57:18.605682   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:21.104915   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:21.261074   61295 out.go:204]   - Booting up control plane ...
	I1024 19:57:21.261197   61295 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1024 19:57:21.261337   61295 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1024 19:57:21.261442   61295 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1024 19:57:21.282142   61295 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 19:57:21.282312   61295 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 19:57:21.282401   61295 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1024 19:57:21.415242   61295 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1024 19:57:20.088135   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:22.089519   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:23.105664   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:25.605064   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:24.587948   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:27.087704   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:28.918712   61295 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503447 seconds
	I1024 19:57:28.918865   61295 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1024 19:57:28.937772   61295 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1024 19:57:29.470923   61295 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1024 19:57:29.471203   61295 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-301948 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1024 19:57:29.987124   61295 kubeadm.go:322] [bootstrap-token] Using token: 9zhzch.2w656sy3xbo7yoze
	I1024 19:57:29.988569   61295 out.go:204]   - Configuring RBAC rules ...
	I1024 19:57:29.988721   61295 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1024 19:57:29.994955   61295 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1024 19:57:30.003489   61295 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1024 19:57:30.011605   61295 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1024 19:57:30.016145   61295 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1024 19:57:30.020086   61295 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1024 19:57:30.033961   61295 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1024 19:57:30.300576   61295 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1024 19:57:30.401953   61295 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1024 19:57:30.404489   61295 kubeadm.go:322] 
	I1024 19:57:30.404588   61295 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1024 19:57:30.404612   61295 kubeadm.go:322] 
	I1024 19:57:30.404703   61295 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1024 19:57:30.404712   61295 kubeadm.go:322] 
	I1024 19:57:30.404766   61295 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1024 19:57:30.404862   61295 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1024 19:57:30.404931   61295 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1024 19:57:30.404942   61295 kubeadm.go:322] 
	I1024 19:57:30.405010   61295 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1024 19:57:30.405021   61295 kubeadm.go:322] 
	I1024 19:57:30.405079   61295 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1024 19:57:30.405089   61295 kubeadm.go:322] 
	I1024 19:57:30.405151   61295 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1024 19:57:30.405251   61295 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1024 19:57:30.405418   61295 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1024 19:57:30.405432   61295 kubeadm.go:322] 
	I1024 19:57:30.405566   61295 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1024 19:57:30.405676   61295 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1024 19:57:30.405688   61295 kubeadm.go:322] 
	I1024 19:57:30.405790   61295 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 9zhzch.2w656sy3xbo7yoze \
	I1024 19:57:30.405904   61295 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2a4e6b3b2fbae5152c11b55fbfca3a5c4a76f76bef7b073915d1f37206892a8a \
	I1024 19:57:30.405946   61295 kubeadm.go:322] 	--control-plane 
	I1024 19:57:30.405956   61295 kubeadm.go:322] 
	I1024 19:57:30.406083   61295 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1024 19:57:30.406091   61295 kubeadm.go:322] 
	I1024 19:57:30.406178   61295 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 9zhzch.2w656sy3xbo7yoze \
	I1024 19:57:30.406346   61295 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2a4e6b3b2fbae5152c11b55fbfca3a5c4a76f76bef7b073915d1f37206892a8a 
	I1024 19:57:30.406475   61295 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
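The [WARNING Service-Kubelet] above is actionable as written; on the guest VM it amounts to:

	sudo systemctl enable kubelet.service    # per the kubeadm warning
	systemctl is-enabled kubelet.service     # should now report "enabled"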
	I1024 19:57:30.406489   61295 cni.go:84] Creating CNI manager for ""
	I1024 19:57:30.406508   61295 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1024 19:57:30.409619   61295 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 19:57:28.105352   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:30.606820   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:30.411060   61295 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 19:57:30.423043   61295 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
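The log records that 457 bytes were written to /etc/cni/net.d/1-k8s.conflist but not the file's contents. For orientation, a generic bridge-plus-portmap conflist in the format the bridge plugin expects is sketched below; the subnet and exact fields minikube generates may differ:

	# Illustrative only -- not the actual file from this run.
	sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF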
	I1024 19:57:30.441345   61295 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 19:57:30.441425   61295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:57:30.441452   61295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca minikube.k8s.io/name=no-preload-301948 minikube.k8s.io/updated_at=2023_10_24T19_57_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:57:31.002957   61295 ops.go:34] apiserver oom_adj: -16
	I1024 19:57:31.003112   61295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:57:31.100527   61295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:57:31.700734   61295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:57:32.200660   61295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:57:29.088673   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:31.586664   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:33.586933   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:33.106081   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:35.605454   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:32.701156   61295 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... the same "kubectl get sa default" probe repeats at ~0.5s intervals through 19:57:37.200 ...]
	I1024 19:57:35.587726   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:37.587787   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:38.104964   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:40.105744   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	[... the probe keeps firing every ~0.5s until 19:57:42.200, when the default service account check finally succeeds (see the elevateKubeSystemPrivileges metric below) ...]
	I1024 19:57:42.380973   61295 kubeadm.go:1081] duration metric: took 11.939614749s to wait for elevateKubeSystemPrivileges.
	I1024 19:57:42.381004   61295 kubeadm.go:406] StartCluster complete in 5m5.100233821s
	I1024 19:57:42.381024   61295 settings.go:142] acquiring lock: {Name:mk36c78ae5c888974883b83cd211b07900a5571c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:57:42.381106   61295 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9104/kubeconfig
	I1024 19:57:42.382077   61295 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9104/kubeconfig: {Name:mk3f1a292620d31d01e0540e90dfb98008d8ef4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:57:42.382326   61295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 19:57:42.382454   61295 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 19:57:42.382530   61295 config.go:182] Loaded profile config "no-preload-301948": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1024 19:57:42.382543   61295 addons.go:69] Setting metrics-server=true in profile "no-preload-301948"
	I1024 19:57:42.382548   61295 addons.go:69] Setting default-storageclass=true in profile "no-preload-301948"
	I1024 19:57:42.382562   61295 addons.go:231] Setting addon metrics-server=true in "no-preload-301948"
	I1024 19:57:42.382573   61295 addons.go:69] Setting dashboard=true in profile "no-preload-301948"
	W1024 19:57:42.382576   61295 addons.go:240] addon metrics-server should already be in state true
	I1024 19:57:42.382589   61295 addons.go:231] Setting addon dashboard=true in "no-preload-301948"
	I1024 19:57:42.382595   61295 cache.go:107] acquiring lock: {Name:mke60744a4234b419c1d64d246b94fc561986c72 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	W1024 19:57:42.382614   61295 addons.go:240] addon dashboard should already be in state true
	I1024 19:57:42.382535   61295 addons.go:69] Setting storage-provisioner=true in profile "no-preload-301948"
	I1024 19:57:42.382643   61295 addons.go:231] Setting addon storage-provisioner=true in "no-preload-301948"
	I1024 19:57:42.382651   61295 cache.go:115] /home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	W1024 19:57:42.382656   61295 addons.go:240] addon storage-provisioner should already be in state true
	I1024 19:57:42.382566   61295 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-301948"
	I1024 19:57:42.382704   61295 host.go:66] Checking if "no-preload-301948" exists ...
	I1024 19:57:42.382660   61295 host.go:66] Checking if "no-preload-301948" exists ...
	I1024 19:57:42.383111   61295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:57:42.383115   61295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:57:42.383132   61295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:57:42.383140   61295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:57:42.383166   61295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:57:42.383190   61295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:57:42.382622   61295 host.go:66] Checking if "no-preload-301948" exists ...
	I1024 19:57:42.382659   61295 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 70.07µs
	I1024 19:57:42.383238   61295 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I1024 19:57:42.383247   61295 cache.go:87] Successfully saved all images to host disk.
	I1024 19:57:42.383513   61295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:57:42.383534   61295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:57:42.383687   61295 config.go:182] Loaded profile config "no-preload-301948": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1024 19:57:42.384027   61295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:57:42.384056   61295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:57:42.401424   61295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45627
	I1024 19:57:42.401647   61295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40339
	I1024 19:57:42.401785   61295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37839
	I1024 19:57:42.401799   61295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
	I1024 19:57:42.401937   61295 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:57:42.402063   61295 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:57:42.402321   61295 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:57:42.402387   61295 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:57:42.402543   61295 main.go:141] libmachine: Using API Version  1
	I1024 19:57:42.402554   61295 main.go:141] libmachine: Using API Version  1
	I1024 19:57:42.402556   61295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:57:42.402571   61295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:57:42.402736   61295 main.go:141] libmachine: Using API Version  1
	I1024 19:57:42.402759   61295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:57:42.402907   61295 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:57:42.402921   61295 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:57:42.403329   61295 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:57:42.403337   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetState
	I1024 19:57:42.403391   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetState
	I1024 19:57:42.403397   61295 main.go:141] libmachine: Using API Version  1
	I1024 19:57:42.403411   61295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:57:42.404368   61295 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:57:42.404561   61295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:57:42.404585   61295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:57:42.405000   61295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:57:42.405033   61295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:57:42.406737   61295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:57:42.406767   61295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:57:42.407698   61295 addons.go:231] Setting addon default-storageclass=true in "no-preload-301948"
	W1024 19:57:42.407732   61295 addons.go:240] addon default-storageclass should already be in state true
	I1024 19:57:42.407759   61295 host.go:66] Checking if "no-preload-301948" exists ...
	I1024 19:57:42.407701   61295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46641
	I1024 19:57:42.408124   61295 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:57:42.408134   61295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:57:42.408155   61295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:57:42.408557   61295 main.go:141] libmachine: Using API Version  1
	I1024 19:57:42.408584   61295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:57:42.408920   61295 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:57:42.409762   61295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:57:42.409788   61295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:57:42.421756   61295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34037
	I1024 19:57:42.422258   61295 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:57:42.422735   61295 main.go:141] libmachine: Using API Version  1
	I1024 19:57:42.422759   61295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:57:42.423082   61295 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:57:42.423258   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetState
	I1024 19:57:42.425224   61295 main.go:141] libmachine: (no-preload-301948) Calling .DriverName
	I1024 19:57:42.427096   61295 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:57:42.426055   61295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39751
	I1024 19:57:42.427293   61295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38671
	I1024 19:57:42.428662   61295 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:57:42.428674   61295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 19:57:42.428692   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetSSHHostname
	I1024 19:57:42.427515   61295 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:57:42.429113   61295 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:57:42.429652   61295 main.go:141] libmachine: Using API Version  1
	I1024 19:57:42.429674   61295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:57:42.429829   61295 main.go:141] libmachine: Using API Version  1
	I1024 19:57:42.429845   61295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:57:42.430178   61295 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:57:42.430381   61295 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:57:42.430412   61295 main.go:141] libmachine: (no-preload-301948) Calling .DriverName
	I1024 19:57:42.430653   61295 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1024 19:57:42.430680   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetSSHHostname
	I1024 19:57:42.430736   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetState
	I1024 19:57:42.433333   61295 main.go:141] libmachine: (no-preload-301948) Calling .DriverName
	I1024 19:57:42.433447   61295 main.go:141] libmachine: (no-preload-301948) DBG | domain no-preload-301948 has defined MAC address 52:54:00:ea:e5:04 in network mk-no-preload-301948
	I1024 19:57:42.435220   61295 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1024 19:57:42.433918   61295 main.go:141] libmachine: (no-preload-301948) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e5:04", ip: ""} in network mk-no-preload-301948: {Iface:virbr2 ExpiryTime:2023-10-24 20:52:25 +0000 UTC Type:0 Mac:52:54:00:ea:e5:04 Iaid: IPaddr:192.168.50.15 Prefix:24 Hostname:no-preload-301948 Clientid:01:52:54:00:ea:e5:04}
	I1024 19:57:42.436967   61295 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 19:57:42.436981   61295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 19:57:42.436995   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetSSHHostname
	I1024 19:57:42.434097   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetSSHPort
	I1024 19:57:42.434656   61295 main.go:141] libmachine: (no-preload-301948) DBG | domain no-preload-301948 has defined MAC address 52:54:00:ea:e5:04 in network mk-no-preload-301948
	I1024 19:57:42.435089   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetSSHPort
	I1024 19:57:42.437057   61295 main.go:141] libmachine: (no-preload-301948) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e5:04", ip: ""} in network mk-no-preload-301948: {Iface:virbr2 ExpiryTime:2023-10-24 20:52:25 +0000 UTC Type:0 Mac:52:54:00:ea:e5:04 Iaid: IPaddr:192.168.50.15 Prefix:24 Hostname:no-preload-301948 Clientid:01:52:54:00:ea:e5:04}
	I1024 19:57:42.435249   61295 main.go:141] libmachine: (no-preload-301948) DBG | domain no-preload-301948 has defined IP address 192.168.50.15 and MAC address 52:54:00:ea:e5:04 in network mk-no-preload-301948
	I1024 19:57:42.437078   61295 main.go:141] libmachine: (no-preload-301948) DBG | domain no-preload-301948 has defined IP address 192.168.50.15 and MAC address 52:54:00:ea:e5:04 in network mk-no-preload-301948
	I1024 19:57:42.437227   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetSSHKeyPath
	I1024 19:57:42.437251   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetSSHKeyPath
	I1024 19:57:42.437397   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetSSHUsername
	I1024 19:57:42.437405   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetSSHUsername
	I1024 19:57:42.437537   61295 sshutil.go:53] new ssh client: &{IP:192.168.50.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/no-preload-301948/id_rsa Username:docker}
	I1024 19:57:42.437543   61295 sshutil.go:53] new ssh client: &{IP:192.168.50.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/no-preload-301948/id_rsa Username:docker}
	I1024 19:57:42.438679   61295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41147
	I1024 19:57:42.439095   61295 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:57:42.439605   61295 main.go:141] libmachine: Using API Version  1
	I1024 19:57:42.439625   61295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:57:42.439920   61295 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:57:42.440111   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetState
	I1024 19:57:42.441793   61295 main.go:141] libmachine: (no-preload-301948) DBG | domain no-preload-301948 has defined MAC address 52:54:00:ea:e5:04 in network mk-no-preload-301948
	I1024 19:57:42.441837   61295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39341
	I1024 19:57:42.441992   61295 main.go:141] libmachine: (no-preload-301948) Calling .DriverName
	I1024 19:57:42.442293   61295 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:57:42.446094   61295 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1024 19:57:42.442837   61295 main.go:141] libmachine: Using API Version  1
	I1024 19:57:42.442875   61295 main.go:141] libmachine: (no-preload-301948) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e5:04", ip: ""} in network mk-no-preload-301948: {Iface:virbr2 ExpiryTime:2023-10-24 20:52:25 +0000 UTC Type:0 Mac:52:54:00:ea:e5:04 Iaid: IPaddr:192.168.50.15 Prefix:24 Hostname:no-preload-301948 Clientid:01:52:54:00:ea:e5:04}
	I1024 19:57:42.443031   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetSSHPort
	I1024 19:57:42.447313   61295 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-301948" context rescaled to 1 replicas
	I1024 19:57:42.449014   61295 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1024 19:57:42.447726   61295 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.15 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1024 19:57:42.447760   61295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:57:42.447782   61295 main.go:141] libmachine: (no-preload-301948) DBG | domain no-preload-301948 has defined IP address 192.168.50.15 and MAC address 52:54:00:ea:e5:04 in network mk-no-preload-301948
	I1024 19:57:42.447979   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetSSHKeyPath
	I1024 19:57:42.451628   61295 out.go:177] * Verifying Kubernetes components...
	I1024 19:57:42.450300   61295 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1024 19:57:42.450396   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetSSHUsername
	I1024 19:57:42.450760   61295 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:57:42.452957   61295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1024 19:57:42.452985   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetSSHHostname
	I1024 19:57:42.453026   61295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:57:42.453167   61295 sshutil.go:53] new ssh client: &{IP:192.168.50.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/no-preload-301948/id_rsa Username:docker}
	I1024 19:57:42.454268   61295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:57:42.454291   61295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:57:42.456190   61295 main.go:141] libmachine: (no-preload-301948) DBG | domain no-preload-301948 has defined MAC address 52:54:00:ea:e5:04 in network mk-no-preload-301948
	I1024 19:57:42.456643   61295 main.go:141] libmachine: (no-preload-301948) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e5:04", ip: ""} in network mk-no-preload-301948: {Iface:virbr2 ExpiryTime:2023-10-24 20:52:25 +0000 UTC Type:0 Mac:52:54:00:ea:e5:04 Iaid: IPaddr:192.168.50.15 Prefix:24 Hostname:no-preload-301948 Clientid:01:52:54:00:ea:e5:04}
	I1024 19:57:42.456674   61295 main.go:141] libmachine: (no-preload-301948) DBG | domain no-preload-301948 has defined IP address 192.168.50.15 and MAC address 52:54:00:ea:e5:04 in network mk-no-preload-301948
	I1024 19:57:42.456842   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetSSHPort
	I1024 19:57:42.457013   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetSSHKeyPath
	I1024 19:57:42.457136   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetSSHUsername
	I1024 19:57:42.457238   61295 sshutil.go:53] new ssh client: &{IP:192.168.50.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/no-preload-301948/id_rsa Username:docker}
	I1024 19:57:42.470749   61295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34245
	I1024 19:57:42.471186   61295 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:57:42.471570   61295 main.go:141] libmachine: Using API Version  1
	I1024 19:57:42.471582   61295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:57:42.471941   61295 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:57:42.472134   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetState
	I1024 19:57:42.473824   61295 main.go:141] libmachine: (no-preload-301948) Calling .DriverName
	I1024 19:57:42.474093   61295 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 19:57:42.474105   61295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 19:57:42.474118   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetSSHHostname
	I1024 19:57:42.477350   61295 main.go:141] libmachine: (no-preload-301948) DBG | domain no-preload-301948 has defined MAC address 52:54:00:ea:e5:04 in network mk-no-preload-301948
	I1024 19:57:42.477755   61295 main.go:141] libmachine: (no-preload-301948) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:e5:04", ip: ""} in network mk-no-preload-301948: {Iface:virbr2 ExpiryTime:2023-10-24 20:52:25 +0000 UTC Type:0 Mac:52:54:00:ea:e5:04 Iaid: IPaddr:192.168.50.15 Prefix:24 Hostname:no-preload-301948 Clientid:01:52:54:00:ea:e5:04}
	I1024 19:57:42.477786   61295 main.go:141] libmachine: (no-preload-301948) DBG | domain no-preload-301948 has defined IP address 192.168.50.15 and MAC address 52:54:00:ea:e5:04 in network mk-no-preload-301948
	I1024 19:57:42.477934   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetSSHPort
	I1024 19:57:42.478142   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetSSHKeyPath
	I1024 19:57:42.478297   61295 main.go:141] libmachine: (no-preload-301948) Calling .GetSSHUsername
	I1024 19:57:42.478421   61295 sshutil.go:53] new ssh client: &{IP:192.168.50.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/no-preload-301948/id_rsa Username:docker}
	I1024 19:57:40.087823   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:42.586845   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:42.606457   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:45.105666   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:42.713944   61295 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 19:57:42.713969   61295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1024 19:57:42.748916   61295 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1024 19:57:42.748945   61295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1024 19:57:42.751355   61295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 19:57:42.752186   61295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:57:42.795341   61295 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 19:57:42.795367   61295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 19:57:42.827739   61295 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1024 19:57:42.827766   61295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1024 19:57:42.862168   61295 node_ready.go:35] waiting up to 6m0s for node "no-preload-301948" to be "Ready" ...
	I1024 19:57:42.862320   61295 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
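The sed pipeline above is dense; reconstructed from its two expressions, the patched Corefile gains a log directive ahead of errors and a hosts block ahead of the forward line ("..." marks directives the pipeline leaves untouched):

	        log
	        errors
	        ...
	        hosts {
	           192.168.50.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...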
	I1024 19:57:42.862319   61295 docker.go:689] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1024 19:57:42.862388   61295 cache_images.go:84] Images are preloaded, skipping loading
	I1024 19:57:42.862397   61295 cache_images.go:262] succeeded pushing to: no-preload-301948
	I1024 19:57:42.862404   61295 cache_images.go:263] failed pushing to: 
	I1024 19:57:42.862423   61295 main.go:141] libmachine: Making call to close driver server
	I1024 19:57:42.862438   61295 main.go:141] libmachine: (no-preload-301948) Calling .Close
	I1024 19:57:42.862676   61295 main.go:141] libmachine: (no-preload-301948) DBG | Closing plugin on server side
	I1024 19:57:42.862714   61295 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:57:42.862726   61295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:57:42.862741   61295 main.go:141] libmachine: Making call to close driver server
	I1024 19:57:42.862765   61295 main.go:141] libmachine: (no-preload-301948) Calling .Close
	I1024 19:57:42.863051   61295 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:57:42.863099   61295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:57:42.863071   61295 main.go:141] libmachine: (no-preload-301948) DBG | Closing plugin on server side
	I1024 19:57:42.866241   61295 node_ready.go:49] node "no-preload-301948" has status "Ready":"True"
	I1024 19:57:42.866266   61295 node_ready.go:38] duration metric: took 4.073285ms waiting for node "no-preload-301948" to be "Ready" ...
	I1024 19:57:42.866279   61295 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:57:42.873100   61295 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-7mr9d" in "kube-system" namespace to be "Ready" ...
	I1024 19:57:42.915636   61295 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 19:57:42.915683   61295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 19:57:42.990886   61295 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1024 19:57:42.990911   61295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1024 19:57:43.056583   61295 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1024 19:57:43.056602   61295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1024 19:57:43.130738   61295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 19:57:43.227448   61295 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1024 19:57:43.227476   61295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1024 19:57:43.350804   61295 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1024 19:57:43.350832   61295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1024 19:57:43.547139   61295 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1024 19:57:43.547158   61295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1024 19:57:43.666927   61295 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1024 19:57:43.666953   61295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1024 19:57:43.711161   61295 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1024 19:57:43.711184   61295 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1024 19:57:43.746260   61295 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1024 19:57:44.393304   61295 pod_ready.go:97] error getting pod "coredns-5dd5756b68-7mr9d" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-7mr9d" not found
	I1024 19:57:44.393337   61295 pod_ready.go:81] duration metric: took 1.520215826s waiting for pod "coredns-5dd5756b68-7mr9d" in "kube-system" namespace to be "Ready" ...
	E1024 19:57:44.393351   61295 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-7mr9d" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-7mr9d" not found
	I1024 19:57:44.393361   61295 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-vwxxt" in "kube-system" namespace to be "Ready" ...
	I1024 19:57:45.415206   61295 pod_ready.go:92] pod "coredns-5dd5756b68-vwxxt" in "kube-system" namespace has status "Ready":"True"
	I1024 19:57:45.415236   61295 pod_ready.go:81] duration metric: took 1.021867231s waiting for pod "coredns-5dd5756b68-vwxxt" in "kube-system" namespace to be "Ready" ...
	I1024 19:57:45.415251   61295 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-301948" in "kube-system" namespace to be "Ready" ...
	I1024 19:57:45.426550   61295 pod_ready.go:92] pod "etcd-no-preload-301948" in "kube-system" namespace has status "Ready":"True"
	I1024 19:57:45.426578   61295 pod_ready.go:81] duration metric: took 11.318063ms waiting for pod "etcd-no-preload-301948" in "kube-system" namespace to be "Ready" ...
	I1024 19:57:45.426591   61295 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-301948" in "kube-system" namespace to be "Ready" ...
	I1024 19:57:45.466361   61295 pod_ready.go:92] pod "kube-apiserver-no-preload-301948" in "kube-system" namespace has status "Ready":"True"
	I1024 19:57:45.466388   61295 pod_ready.go:81] duration metric: took 39.787934ms waiting for pod "kube-apiserver-no-preload-301948" in "kube-system" namespace to be "Ready" ...
	I1024 19:57:45.466400   61295 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-301948" in "kube-system" namespace to be "Ready" ...
	I1024 19:57:45.674513   61295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.923126291s)
	I1024 19:57:45.674544   61295 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.812196557s)
	I1024 19:57:45.674559   61295 main.go:141] libmachine: Making call to close driver server
	I1024 19:57:45.674563   61295 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
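	The sed pipeline that just completed rewrites the CoreDNS Corefile so cluster pods can resolve host.minikube.internal to the host-side gateway (192.168.50.1 here). Reconstructed from the two sed expressions above, the edited Corefile fragment looks approximately like this (surrounding plugins elided):

		.:53 {
		    log                 # inserted before "errors" by the second sed expression
		    errors
		    ...
		    hosts {             # inserted before the "forward" plugin
		       192.168.50.1 host.minikube.internal
		       fallthrough
		    }
		    forward . /etc/resolv.conf
		    ...
		}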
	I1024 19:57:45.674571   61295 main.go:141] libmachine: (no-preload-301948) Calling .Close
	I1024 19:57:45.674656   61295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.922449796s)
	I1024 19:57:45.674697   61295 main.go:141] libmachine: Making call to close driver server
	I1024 19:57:45.674708   61295 main.go:141] libmachine: (no-preload-301948) Calling .Close
	I1024 19:57:45.674884   61295 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:57:45.674898   61295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:57:45.674908   61295 main.go:141] libmachine: Making call to close driver server
	I1024 19:57:45.674916   61295 main.go:141] libmachine: (no-preload-301948) Calling .Close
	I1024 19:57:45.675270   61295 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:57:45.675290   61295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:57:45.675300   61295 main.go:141] libmachine: Making call to close driver server
	I1024 19:57:45.675316   61295 main.go:141] libmachine: (no-preload-301948) Calling .Close
	I1024 19:57:45.675270   61295 main.go:141] libmachine: (no-preload-301948) DBG | Closing plugin on server side
	I1024 19:57:45.675378   61295 main.go:141] libmachine: (no-preload-301948) DBG | Closing plugin on server side
	I1024 19:57:45.675400   61295 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:57:45.675409   61295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:57:45.675666   61295 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:57:45.675680   61295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:57:45.712634   61295 main.go:141] libmachine: Making call to close driver server
	I1024 19:57:45.712662   61295 main.go:141] libmachine: (no-preload-301948) Calling .Close
	I1024 19:57:45.712959   61295 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:57:45.712974   61295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:57:45.897524   61295 pod_ready.go:92] pod "kube-controller-manager-no-preload-301948" in "kube-system" namespace has status "Ready":"True"
	I1024 19:57:45.897546   61295 pod_ready.go:81] duration metric: took 431.138609ms waiting for pod "kube-controller-manager-no-preload-301948" in "kube-system" namespace to be "Ready" ...
	I1024 19:57:45.897556   61295 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9nbd7" in "kube-system" namespace to be "Ready" ...
	I1024 19:57:45.991908   61295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.861135361s)
	I1024 19:57:45.991955   61295 main.go:141] libmachine: Making call to close driver server
	I1024 19:57:45.991963   61295 main.go:141] libmachine: (no-preload-301948) Calling .Close
	I1024 19:57:45.992268   61295 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:57:45.992330   61295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:57:45.992343   61295 main.go:141] libmachine: (no-preload-301948) DBG | Closing plugin on server side
	I1024 19:57:45.992354   61295 main.go:141] libmachine: Making call to close driver server
	I1024 19:57:45.992365   61295 main.go:141] libmachine: (no-preload-301948) Calling .Close
	I1024 19:57:45.992639   61295 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:57:45.992655   61295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:57:45.992665   61295 addons.go:467] Verifying addon metrics-server=true in "no-preload-301948"
	I1024 19:57:46.270757   61295 pod_ready.go:92] pod "kube-proxy-9nbd7" in "kube-system" namespace has status "Ready":"True"
	I1024 19:57:46.270787   61295 pod_ready.go:81] duration metric: took 373.223895ms waiting for pod "kube-proxy-9nbd7" in "kube-system" namespace to be "Ready" ...
	I1024 19:57:46.270801   61295 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-301948" in "kube-system" namespace to be "Ready" ...
	I1024 19:57:46.670694   61295 pod_ready.go:92] pod "kube-scheduler-no-preload-301948" in "kube-system" namespace has status "Ready":"True"
	I1024 19:57:46.670720   61295 pod_ready.go:81] duration metric: took 399.909951ms waiting for pod "kube-scheduler-no-preload-301948" in "kube-system" namespace to be "Ready" ...
	I1024 19:57:46.670732   61295 pod_ready.go:38] duration metric: took 3.804438073s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
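	The pod_ready.go checks above boil down to reading each pod's PodReady condition. A minimal client-go sketch of that check (the kubeconfig path and pod name are taken from the log; the helper itself is illustrative, not minikube's implementation):

		package main

		import (
			"context"
			"fmt"

			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)

		// isPodReady mirrors the `has status "Ready":"True"` lines above.
		func isPodReady(pod *corev1.Pod) bool {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue
				}
			}
			return false
		}

		func main() {
			cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
			if err != nil {
				panic(err)
			}
			client, err := kubernetes.NewForConfig(cfg)
			if err != nil {
				panic(err)
			}
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-vwxxt", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
		}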
	I1024 19:57:46.670767   61295 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:57:46.670822   61295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:57:46.976771   61295 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.230465716s)
	I1024 19:57:46.976815   61295 api_server.go:72] duration metric: took 4.526567373s to wait for apiserver process to appear ...
	I1024 19:57:46.976826   61295 main.go:141] libmachine: Making call to close driver server
	I1024 19:57:46.976830   61295 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:57:46.976836   61295 main.go:141] libmachine: (no-preload-301948) Calling .Close
	I1024 19:57:46.976846   61295 api_server.go:253] Checking apiserver healthz at https://192.168.50.15:8443/healthz ...
	I1024 19:57:46.977177   61295 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:57:46.977196   61295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:57:46.977221   61295 main.go:141] libmachine: Making call to close driver server
	I1024 19:57:46.977255   61295 main.go:141] libmachine: (no-preload-301948) Calling .Close
	I1024 19:57:46.978105   61295 main.go:141] libmachine: (no-preload-301948) DBG | Closing plugin on server side
	I1024 19:57:46.978114   61295 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:57:46.978128   61295 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:57:46.980037   61295 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-301948 addons enable metrics-server	
	
	
	I1024 19:57:46.981722   61295 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1024 19:57:46.983146   61295 addons.go:502] enable addons completed in 4.600694377s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1024 19:57:46.987011   61295 api_server.go:279] https://192.168.50.15:8443/healthz returned 200:
	ok
	I1024 19:57:46.988151   61295 api_server.go:141] control plane version: v1.28.3
	I1024 19:57:46.988178   61295 api_server.go:131] duration metric: took 11.341805ms to wait for apiserver health ...
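	The api_server.go lines poll the apiserver's /healthz endpoint over HTTPS until it answers 200 with body "ok". A bare-bones Go version of that probe (it skips TLS verification only to keep the sketch short; the real check verifies the apiserver certificate against the cluster CA):

		package main

		import (
			"crypto/tls"
			"fmt"
			"net/http"
		)

		func main() {
			// InsecureSkipVerify is for illustration only.
			client := &http.Client{Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			}}
			resp, err := client.Get("https://192.168.50.15:8443/healthz")
			if err != nil {
				panic(err)
			}
			defer resp.Body.Close()
			fmt.Println("healthz status:", resp.StatusCode) // 200 with body "ok" when healthy
		}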
	I1024 19:57:46.988187   61295 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:57:47.001316   61295 system_pods.go:59] 8 kube-system pods found
	I1024 19:57:47.001340   61295 system_pods.go:61] "coredns-5dd5756b68-vwxxt" [97aeec25-8055-4a3f-9136-31f31e11cdda] Running
	I1024 19:57:47.001347   61295 system_pods.go:61] "etcd-no-preload-301948" [fed2b6d5-6f7a-4dd5-b733-25f28e7af9d3] Running
	I1024 19:57:47.001354   61295 system_pods.go:61] "kube-apiserver-no-preload-301948" [e23f0cd7-860e-4446-b06a-3449dc808ae5] Running
	I1024 19:57:47.001360   61295 system_pods.go:61] "kube-controller-manager-no-preload-301948" [ac76e302-a455-4295-a58c-890cd201a95b] Running
	I1024 19:57:47.001365   61295 system_pods.go:61] "kube-proxy-9nbd7" [b946262d-9be0-4379-8814-90984cf2e9fb] Running
	I1024 19:57:47.001372   61295 system_pods.go:61] "kube-scheduler-no-preload-301948" [7a913083-0666-4016-9414-43049e764a7a] Running
	I1024 19:57:47.001382   61295 system_pods.go:61] "metrics-server-57f55c9bc5-jrxpv" [d6c5d6da-4b07-4020-a3ec-5a1b3b90118a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:57:47.001399   61295 system_pods.go:61] "storage-provisioner" [2c639317-0000-4bdc-be34-e19339d1cf2f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 19:57:47.001411   61295 system_pods.go:74] duration metric: took 13.213472ms to wait for pod list to return data ...
	I1024 19:57:47.001424   61295 default_sa.go:34] waiting for default service account to be created ...
	I1024 19:57:47.076534   61295 default_sa.go:45] found service account: "default"
	I1024 19:57:47.076566   61295 default_sa.go:55] duration metric: took 75.134194ms for default service account to be created ...
	I1024 19:57:47.076579   61295 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 19:57:47.272838   61295 system_pods.go:86] 8 kube-system pods found
	I1024 19:57:47.272871   61295 system_pods.go:89] "coredns-5dd5756b68-vwxxt" [97aeec25-8055-4a3f-9136-31f31e11cdda] Running
	I1024 19:57:47.272880   61295 system_pods.go:89] "etcd-no-preload-301948" [fed2b6d5-6f7a-4dd5-b733-25f28e7af9d3] Running
	I1024 19:57:47.272887   61295 system_pods.go:89] "kube-apiserver-no-preload-301948" [e23f0cd7-860e-4446-b06a-3449dc808ae5] Running
	I1024 19:57:47.272894   61295 system_pods.go:89] "kube-controller-manager-no-preload-301948" [ac76e302-a455-4295-a58c-890cd201a95b] Running
	I1024 19:57:47.272900   61295 system_pods.go:89] "kube-proxy-9nbd7" [b946262d-9be0-4379-8814-90984cf2e9fb] Running
	I1024 19:57:47.272908   61295 system_pods.go:89] "kube-scheduler-no-preload-301948" [7a913083-0666-4016-9414-43049e764a7a] Running
	I1024 19:57:47.272918   61295 system_pods.go:89] "metrics-server-57f55c9bc5-jrxpv" [d6c5d6da-4b07-4020-a3ec-5a1b3b90118a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:57:47.272933   61295 system_pods.go:89] "storage-provisioner" [2c639317-0000-4bdc-be34-e19339d1cf2f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 19:57:47.272942   61295 system_pods.go:126] duration metric: took 196.357023ms to wait for k8s-apps to be running ...
	I1024 19:57:47.272955   61295 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 19:57:47.273010   61295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:57:47.297537   61295 system_svc.go:56] duration metric: took 24.571285ms WaitForService to wait for kubelet.
	I1024 19:57:47.297562   61295 kubeadm.go:581] duration metric: took 4.847318654s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 19:57:47.297578   61295 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:57:47.466456   61295 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 19:57:47.466487   61295 node_conditions.go:123] node cpu capacity is 2
	I1024 19:57:47.466499   61295 node_conditions.go:105] duration metric: took 168.916692ms to run NodePressure ...
	I1024 19:57:47.466514   61295 start.go:228] waiting for startup goroutines ...
	I1024 19:57:47.466523   61295 start.go:233] waiting for cluster config update ...
	I1024 19:57:47.466536   61295 start.go:242] writing updated cluster config ...
	I1024 19:57:47.466790   61295 ssh_runner.go:195] Run: rm -f paused
	I1024 19:57:47.530977   61295 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 19:57:47.532750   61295 out.go:177] * Done! kubectl is now configured to use "no-preload-301948" cluster and "default" namespace by default
	I1024 19:57:44.588467   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:47.089142   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:47.105885   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:49.606059   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:49.089241   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:51.587406   61871 pod_ready.go:102] pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:52.783106   61871 pod_ready.go:81] duration metric: took 4m0.001012044s waiting for pod "metrics-server-57f55c9bc5-gsn2q" in "kube-system" namespace to be "Ready" ...
	E1024 19:57:52.783149   61871 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1024 19:57:52.783167   61871 pod_ready.go:38] duration metric: took 4m9.249614746s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:57:52.783192   61871 kubeadm.go:640] restartCluster took 4m28.517592662s
	W1024 19:57:52.783267   61871 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
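	The failure above is a bounded wait expiring: WaitExtra gives the labeled system pods 4m0s and then surfaces "context deadline exceeded", which is what triggers the kubeadm reset that follows. The general shape of such a wait in Go (a generic sketch, not minikube's implementation; the 3s timeout stands in for the 4m0s budget used here):

		package main

		import (
			"context"
			"fmt"
			"time"
		)

		// waitFor polls cond until it reports true or the timeout expires,
		// the same shape as the WaitExtra loop that timed out above.
		func waitFor(timeout time.Duration, cond func() (bool, error)) error {
			ctx, cancel := context.WithTimeout(context.Background(), timeout)
			defer cancel()
			ticker := time.NewTicker(500 * time.Millisecond)
			defer ticker.Stop()
			for {
				if ok, err := cond(); err != nil {
					return err
				} else if ok {
					return nil
				}
				select {
				case <-ctx.Done():
					return ctx.Err() // "context deadline exceeded"
				case <-ticker.C:
				}
			}
		}

		func main() {
			err := waitFor(3*time.Second, func() (bool, error) {
				return false, nil // a real condition would check pod readiness
			})
			fmt.Println(err) // context deadline exceeded
		}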
	I1024 19:57:52.783304   61871 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1024 19:57:52.105884   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:54.105927   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:56.105999   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:57:58.609849   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:58:01.106354   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:58:02.868283   61871 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (10.084951414s)
	I1024 19:58:02.868355   61871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:58:02.883857   61871 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 19:58:02.893510   61871 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 19:58:02.902356   61871 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
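	The exit-status-2 result above is expected: kubeadm reset has just removed the kubeconfig files, so the stale-config check finds nothing to clean up and moves straight on to kubeadm init. The same existence check, sketched in Go (file list copied from the log; minikube shells out to ls instead):

		package main

		import (
			"fmt"
			"os"
		)

		func main() {
			// The same four files the failed ls probed for.
			files := []string{
				"/etc/kubernetes/admin.conf",
				"/etc/kubernetes/kubelet.conf",
				"/etc/kubernetes/controller-manager.conf",
				"/etc/kubernetes/scheduler.conf",
			}
			for _, f := range files {
				if _, err := os.Stat(f); err != nil {
					fmt.Println("config check failed, skipping stale config cleanup:", err)
					return
				}
			}
			fmt.Println("all kubeconfigs present; stale config cleanup would proceed")
		}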
	I1024 19:58:02.902391   61871 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1024 19:58:02.958866   61871 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1024 19:58:02.958945   61871 kubeadm.go:322] [preflight] Running pre-flight checks
	I1024 19:58:03.140213   61871 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1024 19:58:03.140353   61871 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1024 19:58:03.140509   61871 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1024 19:58:03.523001   61871 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1024 19:58:03.526056   61871 out.go:204]   - Generating certificates and keys ...
	I1024 19:58:03.526128   61871 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1024 19:58:03.526204   61871 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1024 19:58:03.526329   61871 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1024 19:58:03.526416   61871 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1024 19:58:03.526481   61871 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1024 19:58:03.526570   61871 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1024 19:58:03.526657   61871 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1024 19:58:03.526739   61871 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1024 19:58:03.527314   61871 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1024 19:58:03.528106   61871 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1024 19:58:03.528713   61871 kubeadm.go:322] [certs] Using the existing "sa" key
	I1024 19:58:03.528777   61871 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1024 19:58:03.663387   61871 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1024 19:58:03.797254   61871 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1024 19:58:04.016450   61871 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1024 19:58:04.243616   61871 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1024 19:58:04.244515   61871 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1024 19:58:04.247230   61871 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1024 19:58:03.605104   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:58:05.606857   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:58:04.249248   61871 out.go:204]   - Booting up control plane ...
	I1024 19:58:04.249377   61871 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1024 19:58:04.249476   61871 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1024 19:58:04.249565   61871 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1024 19:58:04.267518   61871 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 19:58:04.268769   61871 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 19:58:04.268844   61871 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1024 19:58:04.403648   61871 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1024 19:58:08.106019   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:58:10.606012   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:58:11.909523   61871 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.505777 seconds
	I1024 19:58:11.909719   61871 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1024 19:58:11.928397   61871 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1024 19:58:12.459047   61871 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1024 19:58:12.459295   61871 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-585475 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1024 19:58:12.982388   61871 kubeadm.go:322] [bootstrap-token] Using token: k1xje4.whn2yen97n6xzbfd
	I1024 19:58:12.983869   61871 out.go:204]   - Configuring RBAC rules ...
	I1024 19:58:12.984025   61871 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1024 19:58:12.996291   61871 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1024 19:58:13.006062   61871 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1024 19:58:13.011956   61871 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1024 19:58:13.019049   61871 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1024 19:58:13.022910   61871 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1024 19:58:13.041413   61871 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1024 19:58:13.289744   61871 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1024 19:58:13.401951   61871 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1024 19:58:13.404615   61871 kubeadm.go:322] 
	I1024 19:58:13.404691   61871 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1024 19:58:13.404706   61871 kubeadm.go:322] 
	I1024 19:58:13.404772   61871 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1024 19:58:13.404780   61871 kubeadm.go:322] 
	I1024 19:58:13.404807   61871 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1024 19:58:13.404877   61871 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1024 19:58:13.404934   61871 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1024 19:58:13.404942   61871 kubeadm.go:322] 
	I1024 19:58:13.405004   61871 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1024 19:58:13.405015   61871 kubeadm.go:322] 
	I1024 19:58:13.405083   61871 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1024 19:58:13.405093   61871 kubeadm.go:322] 
	I1024 19:58:13.405164   61871 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1024 19:58:13.405246   61871 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1024 19:58:13.405354   61871 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1024 19:58:13.405366   61871 kubeadm.go:322] 
	I1024 19:58:13.405474   61871 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1024 19:58:13.405588   61871 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1024 19:58:13.405599   61871 kubeadm.go:322] 
	I1024 19:58:13.405723   61871 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token k1xje4.whn2yen97n6xzbfd \
	I1024 19:58:13.405887   61871 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2a4e6b3b2fbae5152c11b55fbfca3a5c4a76f76bef7b073915d1f37206892a8a \
	I1024 19:58:13.405930   61871 kubeadm.go:322] 	--control-plane 
	I1024 19:58:13.405940   61871 kubeadm.go:322] 
	I1024 19:58:13.406047   61871 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1024 19:58:13.406054   61871 kubeadm.go:322] 
	I1024 19:58:13.406173   61871 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token k1xje4.whn2yen97n6xzbfd \
	I1024 19:58:13.406319   61871 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:2a4e6b3b2fbae5152c11b55fbfca3a5c4a76f76bef7b073915d1f37206892a8a 
	I1024 19:58:13.408226   61871 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1024 19:58:13.408255   61871 cni.go:84] Creating CNI manager for ""
	I1024 19:58:13.408272   61871 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1024 19:58:13.410120   61871 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1024 19:58:13.411534   61871 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1024 19:58:13.423907   61871 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
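	The 457-byte file pushed to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced at the "Configuring bridge CNI" step above. A representative bridge conflist of that kind (field values are illustrative; the exact bytes minikube writes are not shown in the log):

		{
		  "cniVersion": "0.3.1",
		  "name": "bridge",
		  "plugins": [
		    {
		      "type": "bridge",
		      "bridge": "bridge",
		      "isDefaultGateway": true,
		      "ipMasq": true,
		      "hairpinMode": true,
		      "ipam": {
		        "type": "host-local",
		        "subnet": "10.244.0.0/16"
		      }
		    },
		    {
		      "type": "portmap",
		      "capabilities": {"portMappings": true}
		    }
		  ]
		}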
	I1024 19:58:13.466629   61871 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 19:58:13.466671   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:13.466730   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca minikube.k8s.io/name=embed-certs-585475 minikube.k8s.io/updated_at=2023_10_24T19_58_13_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:12.606084   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:58:14.606541   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:58:16.606773   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:58:14.030387   61871 ops.go:34] apiserver oom_adj: -16
	I1024 19:58:14.038216   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:14.201584   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:14.801114   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:15.300657   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:15.801311   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:16.301295   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:16.801518   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:17.300874   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:17.800954   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:18.301599   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:18.801412   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:18.606890   61522 pod_ready.go:102] pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace has status "Ready":"False"
	I1024 19:58:20.299299   61522 pod_ready.go:81] duration metric: took 4m0.000613298s waiting for pod "metrics-server-74d5856cc6-4pkrg" in "kube-system" namespace to be "Ready" ...
	E1024 19:58:20.299340   61522 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1024 19:58:20.299360   61522 pod_ready.go:38] duration metric: took 4m1.200131205s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:58:20.299396   61522 kubeadm.go:640] restartCluster took 5m13.643377385s
	W1024 19:58:20.299448   61522 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1024 19:58:20.299473   61522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1024 19:58:23.150526   61522 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (2.851022289s)
	I1024 19:58:23.150596   61522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:58:23.164284   61522 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1024 19:58:23.173108   61522 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1024 19:58:23.181680   61522 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1024 19:58:23.181720   61522 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1024 19:58:23.244011   61522 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1024 19:58:23.244121   61522 kubeadm.go:322] [preflight] Running pre-flight checks
	I1024 19:58:23.462482   61522 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1024 19:58:23.462651   61522 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1024 19:58:23.462786   61522 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1024 19:58:23.633979   61522 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1024 19:58:23.635385   61522 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1024 19:58:23.643631   61522 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1024 19:58:23.755230   61522 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1024 19:58:19.301634   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:19.800763   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:20.301093   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:20.801588   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:21.301251   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:21.801630   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:22.301432   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:22.800727   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:23.301191   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:23.801519   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:23.758320   61522 out.go:204]   - Generating certificates and keys ...
	I1024 19:58:23.758421   61522 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1024 19:58:23.758528   61522 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1024 19:58:23.758633   61522 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1024 19:58:23.758726   61522 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1024 19:58:23.758834   61522 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1024 19:58:23.763579   61522 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1024 19:58:23.763641   61522 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1024 19:58:23.763716   61522 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1024 19:58:23.763800   61522 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1024 19:58:23.763886   61522 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1024 19:58:23.763941   61522 kubeadm.go:322] [certs] Using the existing "sa" key
	I1024 19:58:23.763991   61522 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1024 19:58:23.933658   61522 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1024 19:58:24.008144   61522 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1024 19:58:24.340159   61522 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1024 19:58:24.451013   61522 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1024 19:58:24.451949   61522 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1024 19:58:24.301674   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:24.800747   61871 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:24.936062   61871 kubeadm.go:1081] duration metric: took 11.469438682s to wait for elevateKubeSystemPrivileges.
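	The burst of `kubectl get sa default` runs above is the elevateKubeSystemPrivileges wait: minikube retries until the default service account exists before finishing cluster bring-up. An equivalent poll with client-go (the ~500ms cadence and retry budget are assumptions inferred from the timestamps; minikube shells out to kubectl as the log shows):

		package main

		import (
			"context"
			"fmt"
			"time"

			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)

		func main() {
			cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
			if err != nil {
				panic(err)
			}
			client, err := kubernetes.NewForConfig(cfg)
			if err != nil {
				panic(err)
			}
			// Retry roughly every 500ms, matching the cadence of the kubectl runs above.
			for i := 0; i < 240; i++ {
				if _, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{}); err == nil {
					fmt.Println("default service account exists")
					return
				}
				time.Sleep(500 * time.Millisecond)
			}
			fmt.Println("timed out waiting for the default service account")
		}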
	I1024 19:58:24.936089   61871 kubeadm.go:406] StartCluster complete in 5m0.707592724s
	I1024 19:58:24.936106   61871 settings.go:142] acquiring lock: {Name:mk36c78ae5c888974883b83cd211b07900a5571c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:58:24.936196   61871 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9104/kubeconfig
	I1024 19:58:24.937619   61871 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9104/kubeconfig: {Name:mk3f1a292620d31d01e0540e90dfb98008d8ef4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:58:24.937866   61871 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 19:58:24.938003   61871 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 19:58:24.938112   61871 config.go:182] Loaded profile config "embed-certs-585475": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1024 19:58:24.938120   61871 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-585475"
	I1024 19:58:24.938144   61871 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-585475"
	W1024 19:58:24.938156   61871 addons.go:240] addon storage-provisioner should already be in state true
	I1024 19:58:24.938188   61871 cache.go:107] acquiring lock: {Name:mke60744a4234b419c1d64d246b94fc561986c72 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:58:24.938222   61871 addons.go:69] Setting dashboard=true in profile "embed-certs-585475"
	I1024 19:58:24.938236   61871 addons.go:231] Setting addon dashboard=true in "embed-certs-585475"
	W1024 19:58:24.938242   61871 addons.go:240] addon dashboard should already be in state true
	I1024 19:58:24.938251   61871 cache.go:115] /home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I1024 19:58:24.938260   61871 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 82.94µs
	I1024 19:58:24.938270   61871 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I1024 19:58:24.938276   61871 addons.go:69] Setting default-storageclass=true in profile "embed-certs-585475"
	I1024 19:58:24.938270   61871 host.go:66] Checking if "embed-certs-585475" exists ...
	I1024 19:58:24.938285   61871 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-585475"
	I1024 19:58:24.938494   61871 addons.go:69] Setting metrics-server=true in profile "embed-certs-585475"
	I1024 19:58:24.938515   61871 addons.go:231] Setting addon metrics-server=true in "embed-certs-585475"
	W1024 19:58:24.938524   61871 addons.go:240] addon metrics-server should already be in state true
	I1024 19:58:24.938567   61871 host.go:66] Checking if "embed-certs-585475" exists ...
	I1024 19:58:24.938625   61871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:58:24.938655   61871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:58:24.938277   61871 cache.go:87] Successfully saved all images to host disk.
	I1024 19:58:24.938691   61871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:58:24.938660   61871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:58:24.938211   61871 host.go:66] Checking if "embed-certs-585475" exists ...
	I1024 19:58:24.938881   61871 config.go:182] Loaded profile config "embed-certs-585475": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1024 19:58:24.938928   61871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:58:24.938960   61871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:58:24.939093   61871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:58:24.939124   61871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:58:24.939211   61871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:58:24.939241   61871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:58:24.957322   61871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41385
	I1024 19:58:24.957424   61871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42359
	I1024 19:58:24.957777   61871 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:58:24.957810   61871 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:58:24.957886   61871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38961
	I1024 19:58:24.958288   61871 main.go:141] libmachine: Using API Version  1
	I1024 19:58:24.958314   61871 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:58:24.958325   61871 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:58:24.958448   61871 main.go:141] libmachine: Using API Version  1
	I1024 19:58:24.958468   61871 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:58:24.958804   61871 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:58:24.958874   61871 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:58:24.958902   61871 main.go:141] libmachine: Using API Version  1
	I1024 19:58:24.958918   61871 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:58:24.959006   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetState
	I1024 19:58:24.959521   61871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:58:24.959558   61871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:58:24.959942   61871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39023
	I1024 19:58:24.960040   61871 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:58:24.960661   61871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:58:24.960707   61871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:58:24.961087   61871 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:58:24.961546   61871 main.go:141] libmachine: Using API Version  1
	I1024 19:58:24.961568   61871 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:58:24.961953   61871 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:58:24.962207   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetState
	I1024 19:58:24.967464   61871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:58:24.967505   61871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:58:24.968998   61871 addons.go:231] Setting addon default-storageclass=true in "embed-certs-585475"
	W1024 19:58:24.969017   61871 addons.go:240] addon default-storageclass should already be in state true
	I1024 19:58:24.969042   61871 host.go:66] Checking if "embed-certs-585475" exists ...
	I1024 19:58:24.969408   61871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:58:24.969435   61871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:58:24.977471   61871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I1024 19:58:24.978000   61871 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:58:24.978376   61871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33807
	I1024 19:58:24.978534   61871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39783
	I1024 19:58:24.978660   61871 main.go:141] libmachine: Using API Version  1
	I1024 19:58:24.978676   61871 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:58:24.978858   61871 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:58:24.978945   61871 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:58:24.979012   61871 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:58:24.979185   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetState
	I1024 19:58:24.979760   61871 main.go:141] libmachine: Using API Version  1
	I1024 19:58:24.979777   61871 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:58:24.979798   61871 main.go:141] libmachine: Using API Version  1
	I1024 19:58:24.979816   61871 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:58:24.980224   61871 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:58:24.980285   61871 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:58:24.980578   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetState
	I1024 19:58:24.980976   61871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:58:24.981016   61871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:58:24.981234   61871 main.go:141] libmachine: (embed-certs-585475) Calling .DriverName
	I1024 19:58:24.983366   61871 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1024 19:58:24.985296   61871 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 19:58:24.985314   61871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 19:58:24.985333   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetSSHHostname
	I1024 19:58:24.983320   61871 main.go:141] libmachine: (embed-certs-585475) Calling .DriverName
	I1024 19:58:24.989756   61871 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:58:24.988478   61871 main.go:141] libmachine: (embed-certs-585475) DBG | domain embed-certs-585475 has defined MAC address 52:54:00:18:c8:37 in network mk-embed-certs-585475
	I1024 19:58:24.989262   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetSSHPort
	I1024 19:58:24.990195   61871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44255
	I1024 19:58:24.991095   61871 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:58:24.991105   61871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 19:58:24.991120   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetSSHHostname
	I1024 19:58:24.991161   61871 main.go:141] libmachine: (embed-certs-585475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:c8:37", ip: ""} in network mk-embed-certs-585475: {Iface:virbr1 ExpiryTime:2023-10-24 20:53:09 +0000 UTC Type:0 Mac:52:54:00:18:c8:37 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-585475 Clientid:01:52:54:00:18:c8:37}
	I1024 19:58:24.991188   61871 main.go:141] libmachine: (embed-certs-585475) DBG | domain embed-certs-585475 has defined IP address 192.168.39.80 and MAC address 52:54:00:18:c8:37 in network mk-embed-certs-585475
	I1024 19:58:24.991362   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetSSHKeyPath
	I1024 19:58:24.991496   61871 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:58:24.991562   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetSSHUsername
	I1024 19:58:24.991741   61871 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/embed-certs-585475/id_rsa Username:docker}
	I1024 19:58:24.992036   61871 main.go:141] libmachine: Using API Version  1
	I1024 19:58:24.992054   61871 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:58:24.992376   61871 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:58:24.992977   61871 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:58:24.993016   61871 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:58:24.994092   61871 main.go:141] libmachine: (embed-certs-585475) DBG | domain embed-certs-585475 has defined MAC address 52:54:00:18:c8:37 in network mk-embed-certs-585475
	I1024 19:58:24.994593   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetSSHPort
	I1024 19:58:24.994623   61871 main.go:141] libmachine: (embed-certs-585475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:c8:37", ip: ""} in network mk-embed-certs-585475: {Iface:virbr1 ExpiryTime:2023-10-24 20:53:09 +0000 UTC Type:0 Mac:52:54:00:18:c8:37 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-585475 Clientid:01:52:54:00:18:c8:37}
	I1024 19:58:24.994643   61871 main.go:141] libmachine: (embed-certs-585475) DBG | domain embed-certs-585475 has defined IP address 192.168.39.80 and MAC address 52:54:00:18:c8:37 in network mk-embed-certs-585475
	I1024 19:58:24.994766   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetSSHKeyPath
	I1024 19:58:24.994897   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetSSHUsername
	I1024 19:58:24.994992   61871 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/embed-certs-585475/id_rsa Username:docker}
	I1024 19:58:25.005714   61871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39677
	I1024 19:58:25.006289   61871 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:58:25.006957   61871 main.go:141] libmachine: Using API Version  1
	I1024 19:58:25.006979   61871 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:58:25.007616   61871 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:58:25.007859   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetState
	I1024 19:58:25.009900   61871 main.go:141] libmachine: (embed-certs-585475) Calling .DriverName
	I1024 19:58:25.012209   61871 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1024 19:58:25.011333   61871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35367
	I1024 19:58:25.013912   61871 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1024 19:58:25.012644   61871 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:58:25.013377   61871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33373
	I1024 19:58:25.015702   61871 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1024 19:58:25.015717   61871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1024 19:58:25.015742   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetSSHHostname
	I1024 19:58:25.016055   61871 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:58:25.016257   61871 main.go:141] libmachine: Using API Version  1
	I1024 19:58:25.016274   61871 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:58:25.016725   61871 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:58:25.017065   61871 main.go:141] libmachine: (embed-certs-585475) Calling .DriverName
	I1024 19:58:25.017351   61871 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1024 19:58:25.017372   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetSSHHostname
	I1024 19:58:25.018524   61871 main.go:141] libmachine: Using API Version  1
	I1024 19:58:25.018548   61871 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:58:25.018984   61871 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:58:25.019204   61871 main.go:141] libmachine: (embed-certs-585475) DBG | domain embed-certs-585475 has defined MAC address 52:54:00:18:c8:37 in network mk-embed-certs-585475
	I1024 19:58:25.019246   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetState
	I1024 19:58:25.020073   61871 main.go:141] libmachine: (embed-certs-585475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:c8:37", ip: ""} in network mk-embed-certs-585475: {Iface:virbr1 ExpiryTime:2023-10-24 20:53:09 +0000 UTC Type:0 Mac:52:54:00:18:c8:37 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-585475 Clientid:01:52:54:00:18:c8:37}
	I1024 19:58:25.020305   61871 main.go:141] libmachine: (embed-certs-585475) DBG | domain embed-certs-585475 has defined IP address 192.168.39.80 and MAC address 52:54:00:18:c8:37 in network mk-embed-certs-585475
	I1024 19:58:25.020482   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetSSHPort
	I1024 19:58:25.020634   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetSSHKeyPath
	I1024 19:58:25.020777   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetSSHUsername
	I1024 19:58:25.020841   61871 main.go:141] libmachine: (embed-certs-585475) DBG | domain embed-certs-585475 has defined MAC address 52:54:00:18:c8:37 in network mk-embed-certs-585475
	I1024 19:58:25.020902   61871 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/embed-certs-585475/id_rsa Username:docker}
	I1024 19:58:25.021502   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetSSHPort
	I1024 19:58:25.021521   61871 main.go:141] libmachine: (embed-certs-585475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:c8:37", ip: ""} in network mk-embed-certs-585475: {Iface:virbr1 ExpiryTime:2023-10-24 20:53:09 +0000 UTC Type:0 Mac:52:54:00:18:c8:37 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-585475 Clientid:01:52:54:00:18:c8:37}
	I1024 19:58:25.021544   61871 main.go:141] libmachine: (embed-certs-585475) DBG | domain embed-certs-585475 has defined IP address 192.168.39.80 and MAC address 52:54:00:18:c8:37 in network mk-embed-certs-585475
	I1024 19:58:25.021751   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetSSHKeyPath
	I1024 19:58:25.021921   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetSSHUsername
	I1024 19:58:25.022084   61871 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/embed-certs-585475/id_rsa Username:docker}
	I1024 19:58:25.022166   61871 main.go:141] libmachine: (embed-certs-585475) Calling .DriverName
	I1024 19:58:25.022402   61871 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 19:58:25.022416   61871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 19:58:25.022431   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetSSHHostname
	I1024 19:58:25.025353   61871 main.go:141] libmachine: (embed-certs-585475) DBG | domain embed-certs-585475 has defined MAC address 52:54:00:18:c8:37 in network mk-embed-certs-585475
	I1024 19:58:25.025754   61871 main.go:141] libmachine: (embed-certs-585475) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:c8:37", ip: ""} in network mk-embed-certs-585475: {Iface:virbr1 ExpiryTime:2023-10-24 20:53:09 +0000 UTC Type:0 Mac:52:54:00:18:c8:37 Iaid: IPaddr:192.168.39.80 Prefix:24 Hostname:embed-certs-585475 Clientid:01:52:54:00:18:c8:37}
	I1024 19:58:25.025781   61871 main.go:141] libmachine: (embed-certs-585475) DBG | domain embed-certs-585475 has defined IP address 192.168.39.80 and MAC address 52:54:00:18:c8:37 in network mk-embed-certs-585475
	I1024 19:58:25.025950   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetSSHPort
	I1024 19:58:25.026144   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetSSHKeyPath
	I1024 19:58:25.026236   61871 main.go:141] libmachine: (embed-certs-585475) Calling .GetSSHUsername
	I1024 19:58:25.026329   61871 sshutil.go:53] new ssh client: &{IP:192.168.39.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/embed-certs-585475/id_rsa Username:docker}
	I1024 19:58:25.028197   61871 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-585475" context rescaled to 1 replicas
	I1024 19:58:25.028232   61871 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.80 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1024 19:58:25.030368   61871 out.go:177] * Verifying Kubernetes components...
	I1024 19:58:24.454085   61522 out.go:204]   - Booting up control plane ...
	I1024 19:58:24.454204   61522 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1024 19:58:24.461633   61522 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1024 19:58:24.466090   61522 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1024 19:58:24.466242   61522 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1024 19:58:24.473240   61522 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1024 19:58:25.031798   61871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:58:25.303917   61871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 19:58:25.313209   61871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:58:25.325828   61871 node_ready.go:35] waiting up to 6m0s for node "embed-certs-585475" to be "Ready" ...
	I1024 19:58:25.326059   61871 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1024 19:58:25.344277   61871 node_ready.go:49] node "embed-certs-585475" has status "Ready":"True"
	I1024 19:58:25.344307   61871 node_ready.go:38] duration metric: took 18.445921ms waiting for node "embed-certs-585475" to be "Ready" ...
	I1024 19:58:25.344319   61871 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:58:25.351448   61871 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-585475" in "kube-system" namespace to be "Ready" ...
	I1024 19:58:25.366611   61871 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 19:58:25.366640   61871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1024 19:58:25.371506   61871 pod_ready.go:92] pod "etcd-embed-certs-585475" in "kube-system" namespace has status "Ready":"True"
	I1024 19:58:25.371534   61871 pod_ready.go:81] duration metric: took 20.054467ms waiting for pod "etcd-embed-certs-585475" in "kube-system" namespace to be "Ready" ...
	I1024 19:58:25.371548   61871 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-585475" in "kube-system" namespace to be "Ready" ...
	I1024 19:58:25.391799   61871 docker.go:689] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1024 19:58:25.391833   61871 cache_images.go:84] Images are preloaded, skipping loading
	I1024 19:58:25.391842   61871 cache_images.go:262] succeeded pushing to: embed-certs-585475
	I1024 19:58:25.391847   61871 cache_images.go:263] failed pushing to: 
	I1024 19:58:25.391869   61871 main.go:141] libmachine: Making call to close driver server
	I1024 19:58:25.391888   61871 main.go:141] libmachine: (embed-certs-585475) Calling .Close
	I1024 19:58:25.392177   61871 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:58:25.392194   61871 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:58:25.392204   61871 main.go:141] libmachine: Making call to close driver server
	I1024 19:58:25.392213   61871 main.go:141] libmachine: (embed-certs-585475) Calling .Close
	I1024 19:58:25.392642   61871 main.go:141] libmachine: (embed-certs-585475) DBG | Closing plugin on server side
	I1024 19:58:25.392674   61871 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:58:25.392702   61871 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:58:25.398677   61871 pod_ready.go:92] pod "kube-apiserver-embed-certs-585475" in "kube-system" namespace has status "Ready":"True"
	I1024 19:58:25.398707   61871 pod_ready.go:81] duration metric: took 27.14974ms waiting for pod "kube-apiserver-embed-certs-585475" in "kube-system" namespace to be "Ready" ...
	I1024 19:58:25.398722   61871 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-585475" in "kube-system" namespace to be "Ready" ...
	I1024 19:58:25.460805   61871 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 19:58:25.460833   61871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 19:58:25.510045   61871 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1024 19:58:25.510070   61871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1024 19:58:25.581318   61871 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 19:58:25.581346   61871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 19:58:25.583082   61871 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1024 19:58:25.583103   61871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1024 19:58:25.602372   61871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 19:58:25.803187   61871 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1024 19:58:25.803213   61871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1024 19:58:25.952995   61871 pod_ready.go:92] pod "kube-controller-manager-embed-certs-585475" in "kube-system" namespace has status "Ready":"True"
	I1024 19:58:25.953022   61871 pod_ready.go:81] duration metric: took 554.290622ms waiting for pod "kube-controller-manager-embed-certs-585475" in "kube-system" namespace to be "Ready" ...
	I1024 19:58:25.953035   61871 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-585475" in "kube-system" namespace to be "Ready" ...
	I1024 19:58:26.025201   61871 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1024 19:58:26.025225   61871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1024 19:58:26.130570   61871 pod_ready.go:92] pod "kube-scheduler-embed-certs-585475" in "kube-system" namespace has status "Ready":"True"
	I1024 19:58:26.130597   61871 pod_ready.go:81] duration metric: took 177.553407ms waiting for pod "kube-scheduler-embed-certs-585475" in "kube-system" namespace to be "Ready" ...
	I1024 19:58:26.130608   61871 pod_ready.go:38] duration metric: took 786.276527ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:58:26.130631   61871 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:58:26.130685   61871 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:58:26.286387   61871 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1024 19:58:26.286410   61871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1024 19:58:26.447399   61871 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1024 19:58:26.447426   61871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1024 19:58:26.512024   61871 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1024 19:58:26.512048   61871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1024 19:58:26.578266   61871 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1024 19:58:26.578294   61871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1024 19:58:26.619827   61871 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1024 19:58:26.619848   61871 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1024 19:58:26.661090   61871 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1024 19:58:28.001213   61871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.697253074s)
	I1024 19:58:28.001258   61871 main.go:141] libmachine: Making call to close driver server
	I1024 19:58:28.001270   61871 main.go:141] libmachine: (embed-certs-585475) Calling .Close
	I1024 19:58:28.001614   61871 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:58:28.001647   61871 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:58:28.001659   61871 main.go:141] libmachine: Making call to close driver server
	I1024 19:58:28.001669   61871 main.go:141] libmachine: (embed-certs-585475) Calling .Close
	I1024 19:58:28.002039   61871 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:58:28.002058   61871 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:58:28.008526   61871 main.go:141] libmachine: Making call to close driver server
	I1024 19:58:28.008547   61871 main.go:141] libmachine: (embed-certs-585475) Calling .Close
	I1024 19:58:28.008891   61871 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:58:28.008908   61871 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:58:28.680148   61871 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.354054918s)
	I1024 19:58:28.680181   61871 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
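Note: the 3.35s Completed entry above is the CoreDNS ConfigMap rewrite issued at 19:58:25.326. Reconstructed from the sed expressions in the command itself (not captured back from the cluster), the edit adds `log` ahead of the `errors` plugin and inserts the following `hosts` block ahead of `forward . /etc/resolv.conf` in the Corefile, which is what makes host.minikube.internal resolve to the host address 192.168.39.1 from inside the cluster:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }
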
	I1024 19:58:28.680247   61871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.077842576s)
	I1024 19:58:28.680286   61871 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.549582287s)
	I1024 19:58:28.680302   61871 main.go:141] libmachine: Making call to close driver server
	I1024 19:58:28.680342   61871 main.go:141] libmachine: (embed-certs-585475) Calling .Close
	I1024 19:58:28.680305   61871 api_server.go:72] duration metric: took 3.652047988s to wait for apiserver process to appear ...
	I1024 19:58:28.680417   61871 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:58:28.680434   61871 api_server.go:253] Checking apiserver healthz at https://192.168.39.80:8443/healthz ...
	I1024 19:58:28.680683   61871 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:58:28.680717   61871 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:58:28.680729   61871 main.go:141] libmachine: Making call to close driver server
	I1024 19:58:28.680791   61871 main.go:141] libmachine: (embed-certs-585475) Calling .Close
	I1024 19:58:28.681090   61871 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:58:28.681105   61871 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:58:28.681115   61871 addons.go:467] Verifying addon metrics-server=true in "embed-certs-585475"
	I1024 19:58:28.681126   61871 main.go:141] libmachine: (embed-certs-585475) DBG | Closing plugin on server side
	I1024 19:58:28.683177   61871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.369928041s)
	I1024 19:58:28.683212   61871 main.go:141] libmachine: Making call to close driver server
	I1024 19:58:28.683223   61871 main.go:141] libmachine: (embed-certs-585475) Calling .Close
	I1024 19:58:28.683555   61871 main.go:141] libmachine: (embed-certs-585475) DBG | Closing plugin on server side
	I1024 19:58:28.683568   61871 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:58:28.683586   61871 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:58:28.683605   61871 main.go:141] libmachine: Making call to close driver server
	I1024 19:58:28.683618   61871 main.go:141] libmachine: (embed-certs-585475) Calling .Close
	I1024 19:58:28.683897   61871 main.go:141] libmachine: (embed-certs-585475) DBG | Closing plugin on server side
	I1024 19:58:28.683907   61871 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:58:28.683922   61871 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:58:28.687471   61871 api_server.go:279] https://192.168.39.80:8443/healthz returned 200:
	ok
	I1024 19:58:28.691710   61871 api_server.go:141] control plane version: v1.28.3
	I1024 19:58:28.691734   61871 api_server.go:131] duration metric: took 11.309067ms to wait for apiserver health ...
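For reference, the healthz probe logged above can be reproduced by hand from the host. A minimal sketch, assuming unauthenticated access to /healthz is permitted on this apiserver (if not, the cluster's client certificate and key would also have to be passed to curl):

    # -k skips TLS verification of the apiserver's self-signed cert
    curl -k https://192.168.39.80:8443/healthz
    # prints "ok" on success, matching the 200 response in the log
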
	I1024 19:58:28.691744   61871 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:58:28.700022   61871 system_pods.go:59] 8 kube-system pods found
	I1024 19:58:28.700051   61871 system_pods.go:61] "coredns-5dd5756b68-drwc5" [47b69a8b-ab83-408c-866c-0d5a3de38f76] Running
	I1024 19:58:28.700059   61871 system_pods.go:61] "etcd-embed-certs-585475" [9bde23ea-fbca-4d4e-9054-7137affa6213] Running
	I1024 19:58:28.700066   61871 system_pods.go:61] "kube-apiserver-embed-certs-585475" [588cf6e8-64f0-40b7-a9b7-37fdd54917e0] Running
	I1024 19:58:28.700073   61871 system_pods.go:61] "kube-controller-manager-embed-certs-585475" [cb85df42-a95c-4eff-9775-dd676d6b6a31] Running
	I1024 19:58:28.700079   61871 system_pods.go:61] "kube-proxy-7g5nl" [4e358ba6-4087-40d7-b210-b8a631864770] Running
	I1024 19:58:28.700090   61871 system_pods.go:61] "kube-scheduler-embed-certs-585475" [c257749f-2752-4016-9fd4-c381f52c0447] Running
	I1024 19:58:28.700100   61871 system_pods.go:61] "metrics-server-57f55c9bc5-ms56h" [6c60494e-697d-4e5c-b827-bbb595914e63] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:58:28.700118   61871 system_pods.go:61] "storage-provisioner" [4d5430b9-bf82-40f3-a8ac-05ea6a5a9f9c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 19:58:28.700130   61871 system_pods.go:74] duration metric: took 8.379172ms to wait for pod list to return data ...
	I1024 19:58:28.700144   61871 default_sa.go:34] waiting for default service account to be created ...
	I1024 19:58:28.703088   61871 default_sa.go:45] found service account: "default"
	I1024 19:58:28.703108   61871 default_sa.go:55] duration metric: took 2.954252ms for default service account to be created ...
	I1024 19:58:28.703118   61871 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 19:58:28.709973   61871 system_pods.go:86] 8 kube-system pods found
	I1024 19:58:28.709998   61871 system_pods.go:89] "coredns-5dd5756b68-drwc5" [47b69a8b-ab83-408c-866c-0d5a3de38f76] Running
	I1024 19:58:28.710006   61871 system_pods.go:89] "etcd-embed-certs-585475" [9bde23ea-fbca-4d4e-9054-7137affa6213] Running
	I1024 19:58:28.710014   61871 system_pods.go:89] "kube-apiserver-embed-certs-585475" [588cf6e8-64f0-40b7-a9b7-37fdd54917e0] Running
	I1024 19:58:28.710100   61871 system_pods.go:89] "kube-controller-manager-embed-certs-585475" [cb85df42-a95c-4eff-9775-dd676d6b6a31] Running
	I1024 19:58:28.710115   61871 system_pods.go:89] "kube-proxy-7g5nl" [4e358ba6-4087-40d7-b210-b8a631864770] Running
	I1024 19:58:28.710122   61871 system_pods.go:89] "kube-scheduler-embed-certs-585475" [c257749f-2752-4016-9fd4-c381f52c0447] Running
	I1024 19:58:28.710136   61871 system_pods.go:89] "metrics-server-57f55c9bc5-ms56h" [6c60494e-697d-4e5c-b827-bbb595914e63] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:58:28.710149   61871 system_pods.go:89] "storage-provisioner" [4d5430b9-bf82-40f3-a8ac-05ea6a5a9f9c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 19:58:28.710162   61871 system_pods.go:126] duration metric: took 7.038116ms to wait for k8s-apps to be running ...
	I1024 19:58:28.710173   61871 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 19:58:28.710225   61871 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:58:29.737629   61871 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.027377611s)
	I1024 19:58:29.737662   61871 system_svc.go:56] duration metric: took 1.027485665s WaitForService to wait for kubelet.
	I1024 19:58:29.737671   61871 kubeadm.go:581] duration metric: took 4.709413506s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 19:58:29.737696   61871 node_conditions.go:102] verifying NodePressure condition ...
	I1024 19:58:29.737866   61871 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.076724302s)
	I1024 19:58:29.737915   61871 main.go:141] libmachine: Making call to close driver server
	I1024 19:58:29.737932   61871 main.go:141] libmachine: (embed-certs-585475) Calling .Close
	I1024 19:58:29.738226   61871 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:58:29.738246   61871 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:58:29.738257   61871 main.go:141] libmachine: Making call to close driver server
	I1024 19:58:29.738266   61871 main.go:141] libmachine: (embed-certs-585475) Calling .Close
	I1024 19:58:29.738658   61871 main.go:141] libmachine: (embed-certs-585475) DBG | Closing plugin on server side
	I1024 19:58:29.738668   61871 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:58:29.738682   61871 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:58:29.740198   61871 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-585475 addons enable metrics-server	
	
	
	I1024 19:58:29.741679   61871 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I1024 19:58:29.742983   61871 addons.go:502] enable addons completed in 4.804988274s: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I1024 19:58:29.751148   61871 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 19:58:29.751174   61871 node_conditions.go:123] node cpu capacity is 2
	I1024 19:58:29.751185   61871 node_conditions.go:105] duration metric: took 13.483383ms to run NodePressure ...
	I1024 19:58:29.751198   61871 start.go:228] waiting for startup goroutines ...
	I1024 19:58:29.751207   61871 start.go:233] waiting for cluster config update ...
	I1024 19:58:29.751219   61871 start.go:242] writing updated cluster config ...
	I1024 19:58:29.751498   61871 ssh_runner.go:195] Run: rm -f paused
	I1024 19:58:29.825248   61871 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1024 19:58:29.826864   61871 out.go:177] * Done! kubectl is now configured to use "embed-certs-585475" cluster and "default" namespace by default
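With the embed-certs-585475 start complete, the remaining 61522 lines belong to the parallel old-k8s-version-531596 run. A quick manual check against the finished profile (a hypothetical follow-up, not part of the recorded test) would be:

    # minikube names the kubeconfig context after the profile
    kubectl --context embed-certs-585475 get pods -A
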
	I1024 19:58:33.977021   61522 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503424 seconds
	I1024 19:58:33.977184   61522 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1024 19:58:34.018505   61522 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1024 19:58:34.766866   61522 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1024 19:58:34.767035   61522 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-531596 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1024 19:58:35.279088   61522 kubeadm.go:322] [bootstrap-token] Using token: ykf1lj.ndz1kgnbsh88h0tx
	I1024 19:58:35.280743   61522 out.go:204]   - Configuring RBAC rules ...
	I1024 19:58:35.280861   61522 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1024 19:58:35.293581   61522 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1024 19:58:35.298014   61522 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1024 19:58:35.302556   61522 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1024 19:58:35.307611   61522 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1024 19:58:35.387921   61522 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1024 19:58:35.725902   61522 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1024 19:58:35.727409   61522 kubeadm.go:322] 
	I1024 19:58:35.727500   61522 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1024 19:58:35.727519   61522 kubeadm.go:322] 
	I1024 19:58:35.727621   61522 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1024 19:58:35.727636   61522 kubeadm.go:322] 
	I1024 19:58:35.727667   61522 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1024 19:58:35.727738   61522 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1024 19:58:35.727836   61522 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1024 19:58:35.727854   61522 kubeadm.go:322] 
	I1024 19:58:35.727913   61522 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1024 19:58:35.728024   61522 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1024 19:58:35.728126   61522 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1024 19:58:35.728137   61522 kubeadm.go:322] 
	I1024 19:58:35.728277   61522 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1024 19:58:35.728385   61522 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1024 19:58:35.728396   61522 kubeadm.go:322] 
	I1024 19:58:35.728517   61522 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ykf1lj.ndz1kgnbsh88h0tx \
	I1024 19:58:35.728692   61522 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2a4e6b3b2fbae5152c11b55fbfca3a5c4a76f76bef7b073915d1f37206892a8a \
	I1024 19:58:35.728733   61522 kubeadm.go:322]     --control-plane 	  
	I1024 19:58:35.728744   61522 kubeadm.go:322] 
	I1024 19:58:35.728862   61522 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1024 19:58:35.728873   61522 kubeadm.go:322] 
	I1024 19:58:35.728993   61522 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ykf1lj.ndz1kgnbsh88h0tx \
	I1024 19:58:35.729154   61522 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:2a4e6b3b2fbae5152c11b55fbfca3a5c4a76f76bef7b073915d1f37206892a8a 
	I1024 19:58:35.730193   61522 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1024 19:58:35.730325   61522 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.6. Latest validated version: 18.09
	I1024 19:58:35.730457   61522 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1024 19:58:35.730491   61522 cni.go:84] Creating CNI manager for ""
	I1024 19:58:35.730510   61522 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1024 19:58:35.730544   61522 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1024 19:58:35.730638   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:35.730645   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca minikube.k8s.io/name=old-k8s-version-531596 minikube.k8s.io/updated_at=2023_10_24T19_58_35_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:35.755513   61522 ops.go:34] apiserver oom_adj: -16
	I1024 19:58:35.982446   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:36.078079   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:36.692587   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:37.192910   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:37.692100   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:38.192177   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:38.692057   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:39.192201   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:39.692858   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:40.192922   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:40.692183   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:41.192031   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:41.692796   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:42.192060   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:42.693044   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:43.193007   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:43.692778   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:44.192659   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:44.692146   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:45.192592   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:45.692933   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:46.192151   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:46.692178   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:47.192290   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:47.692833   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:48.192085   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:48.692841   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:49.192857   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:49.692841   61522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1024 19:58:49.851143   61522 kubeadm.go:1081] duration metric: took 14.120567986s to wait for elevateKubeSystemPrivileges.
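The burst of identical `kubectl get sa default` runs above (19:58:35.982 through 19:58:49.692) is minikube polling, at roughly 500ms intervals, until the default service account exists after the minikube-rbac clusterrolebinding is created. A standalone sketch of an equivalent wait loop, using the same binary and kubeconfig the log shows:

    until sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5   # approximate retry interval seen in the log
    done
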
	I1024 19:58:49.851180   61522 kubeadm.go:406] StartCluster complete in 5m43.2328308s
	I1024 19:58:49.851203   61522 settings.go:142] acquiring lock: {Name:mk36c78ae5c888974883b83cd211b07900a5571c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:58:49.851297   61522 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17485-9104/kubeconfig
	I1024 19:58:49.852782   61522 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17485-9104/kubeconfig: {Name:mk3f1a292620d31d01e0540e90dfb98008d8ef4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1024 19:58:49.853100   61522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1024 19:58:49.853213   61522 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1024 19:58:49.853276   61522 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-531596"
	I1024 19:58:49.853289   61522 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-531596"
	I1024 19:58:49.853298   61522 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-531596"
	W1024 19:58:49.853306   61522 addons.go:240] addon storage-provisioner should already be in state true
	I1024 19:58:49.853306   61522 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-531596"
	I1024 19:58:49.853308   61522 addons.go:69] Setting dashboard=true in profile "old-k8s-version-531596"
	I1024 19:58:49.853328   61522 addons.go:231] Setting addon dashboard=true in "old-k8s-version-531596"
	I1024 19:58:49.853336   61522 config.go:182] Loaded profile config "old-k8s-version-531596": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1024 19:58:49.853339   61522 addons.go:240] addon dashboard should already be in state true
	I1024 19:58:49.853361   61522 host.go:66] Checking if "old-k8s-version-531596" exists ...
	I1024 19:58:49.853383   61522 host.go:66] Checking if "old-k8s-version-531596" exists ...
	I1024 19:58:49.853393   61522 cache.go:107] acquiring lock: {Name:mke60744a4234b419c1d64d246b94fc561986c72 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:58:49.853454   61522 cache.go:115] /home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I1024 19:58:49.853464   61522 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 76.303µs
	I1024 19:58:49.853474   61522 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17485-9104/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I1024 19:58:49.853480   61522 cache.go:87] Successfully saved all images to host disk.
	I1024 19:58:49.853648   61522 config.go:182] Loaded profile config "old-k8s-version-531596": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1024 19:58:49.853755   61522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:58:49.853758   61522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:58:49.853768   61522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:58:49.853776   61522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:58:49.853777   61522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:58:49.853787   61522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:58:49.853836   61522 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-531596"
	I1024 19:58:49.853848   61522 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-531596"
	W1024 19:58:49.853855   61522 addons.go:240] addon metrics-server should already be in state true
	I1024 19:58:49.853888   61522 host.go:66] Checking if "old-k8s-version-531596" exists ...
	I1024 19:58:49.853952   61522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:58:49.853968   61522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:58:49.854215   61522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:58:49.854244   61522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:58:49.870672   61522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38387
	I1024 19:58:49.871287   61522 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:58:49.871824   61522 main.go:141] libmachine: Using API Version  1
	I1024 19:58:49.871841   61522 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:58:49.872177   61522 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:58:49.872617   61522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:58:49.872637   61522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:58:49.872846   61522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36517
	I1024 19:58:49.872853   61522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35679
	I1024 19:58:49.872887   61522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36643
	I1024 19:58:49.873185   61522 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:58:49.873387   61522 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:58:49.873743   61522 main.go:141] libmachine: Using API Version  1
	I1024 19:58:49.873768   61522 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:58:49.873795   61522 main.go:141] libmachine: Using API Version  1
	I1024 19:58:49.873809   61522 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:58:49.874132   61522 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:58:49.874190   61522 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:58:49.874305   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetState
	I1024 19:58:49.874362   61522 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:58:49.874451   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetState
	I1024 19:58:49.875034   61522 main.go:141] libmachine: Using API Version  1
	I1024 19:58:49.875055   61522 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:58:49.875482   61522 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:58:49.876074   61522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:58:49.876115   61522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:58:49.877394   61522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46761
	I1024 19:58:49.877647   61522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:58:49.877665   61522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:58:49.878881   61522 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-531596"
	W1024 19:58:49.878895   61522 addons.go:240] addon default-storageclass should already be in state true
	I1024 19:58:49.878922   61522 host.go:66] Checking if "old-k8s-version-531596" exists ...
	I1024 19:58:49.879294   61522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:58:49.879337   61522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:58:49.879819   61522 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:58:49.880350   61522 main.go:141] libmachine: Using API Version  1
	I1024 19:58:49.880377   61522 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:58:49.880753   61522 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:58:49.881354   61522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:58:49.881400   61522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:58:49.892877   61522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45229
	I1024 19:58:49.893365   61522 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:58:49.893812   61522 main.go:141] libmachine: Using API Version  1
	I1024 19:58:49.893843   61522 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:58:49.894253   61522 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:58:49.894654   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetState
	I1024 19:58:49.896017   61522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36235
	I1024 19:58:49.896063   61522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39407
	I1024 19:58:49.896592   61522 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:58:49.896597   61522 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:58:49.896597   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .DriverName
	I1024 19:58:49.898531   61522 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1024 19:58:49.897017   61522 main.go:141] libmachine: Using API Version  1
	I1024 19:58:49.897139   61522 main.go:141] libmachine: Using API Version  1
	I1024 19:58:49.899068   61522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37659
	I1024 19:58:49.899073   61522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34011
	I1024 19:58:49.900331   61522 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:58:49.900348   61522 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:58:49.901792   61522 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1024 19:58:49.900771   61522 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:58:49.900789   61522 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:58:49.900814   61522 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:58:49.901048   61522 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:58:49.903163   61522 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1024 19:58:49.903182   61522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1024 19:58:49.903201   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetSSHHostname
	I1024 19:58:49.903285   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetState
	I1024 19:58:49.903303   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .DriverName
	I1024 19:58:49.903481   61522 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1024 19:58:49.903502   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetSSHHostname
	I1024 19:58:49.903516   61522 main.go:141] libmachine: Using API Version  1
	I1024 19:58:49.903531   61522 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:58:49.903632   61522 main.go:141] libmachine: Using API Version  1
	I1024 19:58:49.903644   61522 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:58:49.903961   61522 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:58:49.904196   61522 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:58:49.904647   61522 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:58:49.904689   61522 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:58:49.904723   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetState
	I1024 19:58:49.906110   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .DriverName
	I1024 19:58:49.908775   61522 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1024 19:58:49.906930   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .DriverName
	I1024 19:58:49.907725   61522 main.go:141] libmachine: (old-k8s-version-531596) DBG | domain old-k8s-version-531596 has defined MAC address 52:54:00:03:22:5a in network mk-old-k8s-version-531596
	I1024 19:58:49.910076   61522 main.go:141] libmachine: (old-k8s-version-531596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:22:5a", ip: ""} in network mk-old-k8s-version-531596: {Iface:virbr4 ExpiryTime:2023-10-24 20:49:50 +0000 UTC Type:0 Mac:52:54:00:03:22:5a Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-531596 Clientid:01:52:54:00:03:22:5a}
	I1024 19:58:49.910088   61522 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1024 19:58:49.910106   61522 main.go:141] libmachine: (old-k8s-version-531596) DBG | domain old-k8s-version-531596 has defined IP address 192.168.72.163 and MAC address 52:54:00:03:22:5a in network mk-old-k8s-version-531596
	I1024 19:58:49.908639   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetSSHPort
	I1024 19:58:49.910129   61522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1024 19:58:49.908677   61522 main.go:141] libmachine: (old-k8s-version-531596) DBG | domain old-k8s-version-531596 has defined MAC address 52:54:00:03:22:5a in network mk-old-k8s-version-531596
	I1024 19:58:49.910164   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetSSHHostname
	I1024 19:58:49.910171   61522 main.go:141] libmachine: (old-k8s-version-531596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:22:5a", ip: ""} in network mk-old-k8s-version-531596: {Iface:virbr4 ExpiryTime:2023-10-24 20:49:50 +0000 UTC Type:0 Mac:52:54:00:03:22:5a Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-531596 Clientid:01:52:54:00:03:22:5a}
	I1024 19:58:49.910191   61522 main.go:141] libmachine: (old-k8s-version-531596) DBG | domain old-k8s-version-531596 has defined IP address 192.168.72.163 and MAC address 52:54:00:03:22:5a in network mk-old-k8s-version-531596
	I1024 19:58:49.911617   61522 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1024 19:58:49.909331   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetSSHPort
	I1024 19:58:49.910320   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetSSHKeyPath
	I1024 19:58:49.913221   61522 main.go:141] libmachine: (old-k8s-version-531596) DBG | domain old-k8s-version-531596 has defined MAC address 52:54:00:03:22:5a in network mk-old-k8s-version-531596
	I1024 19:58:49.913339   61522 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:58:49.913354   61522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1024 19:58:49.913365   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetSSHKeyPath
	I1024 19:58:49.913369   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetSSHHostname
	I1024 19:58:49.913388   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetSSHUsername
	I1024 19:58:49.913507   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetSSHUsername
	I1024 19:58:49.913524   61522 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/old-k8s-version-531596/id_rsa Username:docker}
	I1024 19:58:49.913755   61522 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/old-k8s-version-531596/id_rsa Username:docker}
	I1024 19:58:49.913805   61522 main.go:141] libmachine: (old-k8s-version-531596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:22:5a", ip: ""} in network mk-old-k8s-version-531596: {Iface:virbr4 ExpiryTime:2023-10-24 20:49:50 +0000 UTC Type:0 Mac:52:54:00:03:22:5a Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-531596 Clientid:01:52:54:00:03:22:5a}
	I1024 19:58:49.913830   61522 main.go:141] libmachine: (old-k8s-version-531596) DBG | domain old-k8s-version-531596 has defined IP address 192.168.72.163 and MAC address 52:54:00:03:22:5a in network mk-old-k8s-version-531596
	I1024 19:58:49.913954   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetSSHPort
	I1024 19:58:49.914153   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetSSHKeyPath
	I1024 19:58:49.914279   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetSSHUsername
	I1024 19:58:49.914417   61522 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/old-k8s-version-531596/id_rsa Username:docker}
	I1024 19:58:49.916217   61522 main.go:141] libmachine: (old-k8s-version-531596) DBG | domain old-k8s-version-531596 has defined MAC address 52:54:00:03:22:5a in network mk-old-k8s-version-531596
	I1024 19:58:49.916626   61522 main.go:141] libmachine: (old-k8s-version-531596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:22:5a", ip: ""} in network mk-old-k8s-version-531596: {Iface:virbr4 ExpiryTime:2023-10-24 20:49:50 +0000 UTC Type:0 Mac:52:54:00:03:22:5a Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-531596 Clientid:01:52:54:00:03:22:5a}
	I1024 19:58:49.916806   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetSSHPort
	I1024 19:58:49.916837   61522 main.go:141] libmachine: (old-k8s-version-531596) DBG | domain old-k8s-version-531596 has defined IP address 192.168.72.163 and MAC address 52:54:00:03:22:5a in network mk-old-k8s-version-531596
	I1024 19:58:49.916968   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetSSHKeyPath
	I1024 19:58:49.917136   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetSSHUsername
	I1024 19:58:49.917246   61522 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/old-k8s-version-531596/id_rsa Username:docker}
	I1024 19:58:49.922343   61522 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-531596" context rescaled to 1 replicas
	I1024 19:58:49.922381   61522 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.163 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1024 19:58:49.924014   61522 out.go:177] * Verifying Kubernetes components...
	I1024 19:58:49.925423   61522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:58:49.950991   61522 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36243
	I1024 19:58:49.951406   61522 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:58:49.951971   61522 main.go:141] libmachine: Using API Version  1
	I1024 19:58:49.951997   61522 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:58:49.952379   61522 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:58:49.952540   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetState
	I1024 19:58:49.954956   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .DriverName
	I1024 19:58:49.957061   61522 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1024 19:58:49.957079   61522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1024 19:58:49.957098   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetSSHHostname
	I1024 19:58:49.960405   61522 main.go:141] libmachine: (old-k8s-version-531596) DBG | domain old-k8s-version-531596 has defined MAC address 52:54:00:03:22:5a in network mk-old-k8s-version-531596
	I1024 19:58:49.960803   61522 main.go:141] libmachine: (old-k8s-version-531596) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:22:5a", ip: ""} in network mk-old-k8s-version-531596: {Iface:virbr4 ExpiryTime:2023-10-24 20:49:50 +0000 UTC Type:0 Mac:52:54:00:03:22:5a Iaid: IPaddr:192.168.72.163 Prefix:24 Hostname:old-k8s-version-531596 Clientid:01:52:54:00:03:22:5a}
	I1024 19:58:49.960820   61522 main.go:141] libmachine: (old-k8s-version-531596) DBG | domain old-k8s-version-531596 has defined IP address 192.168.72.163 and MAC address 52:54:00:03:22:5a in network mk-old-k8s-version-531596
	I1024 19:58:49.960978   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetSSHPort
	I1024 19:58:49.961157   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetSSHKeyPath
	I1024 19:58:49.961305   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .GetSSHUsername
	I1024 19:58:49.961456   61522 sshutil.go:53] new ssh client: &{IP:192.168.72.163 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/old-k8s-version-531596/id_rsa Username:docker}
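
The `sshutil.go:53] new ssh client` entries above record a key-based SSH client being built from the fields shown (VM IP, port 22, the profile's id_rsa, user `docker`), and the `scp memory --> <path>` entries record addon manifests being streamed from memory to the guest rather than copied from disk. A minimal Go sketch of both steps, assuming golang.org/x/crypto/ssh; `dialWithKey` and `writeRemote` are illustrative names, not minikube's actual helpers:

    package main

    import (
    	"bytes"
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // dialWithKey opens an SSH connection from the fields logged above
    // (IP, Port, SSHKeyPath, Username).
    func dialWithKey(ip string, port int, keyPath, user string) (*ssh.Client, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return nil, err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return nil, err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
    	}
    	return ssh.Dial("tcp", fmt.Sprintf("%s:%d", ip, port), cfg)
    }

    // writeRemote streams in-memory bytes to a remote path: the effect the
    // `scp memory --> /etc/kubernetes/addons/...` lines describe.
    func writeRemote(c *ssh.Client, data []byte, path string) error {
    	sess, err := c.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(data)
    	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", path))
    }

    func main() {
    	c, err := dialWithKey("192.168.72.163", 22, "/path/to/machines/old-k8s-version-531596/id_rsa", "docker")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer c.Close()
    	if err := writeRemote(c, []byte("apiVersion: v1\nkind: Namespace\n"), "/tmp/example.yaml"); err != nil {
    		log.Fatal(err)
    	}
    }
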
	I1024 19:58:50.078640   61522 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-531596" to be "Ready" ...
	I1024 19:58:50.078691   61522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1024 19:58:50.081463   61522 node_ready.go:49] node "old-k8s-version-531596" has status "Ready":"True"
	I1024 19:58:50.081481   61522 node_ready.go:38] duration metric: took 2.810469ms waiting for node "old-k8s-version-531596" to be "Ready" ...
	I1024 19:58:50.081492   61522 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:58:50.084253   61522 pod_ready.go:38] duration metric: took 2.747242ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1024 19:58:50.084279   61522 api_server.go:52] waiting for apiserver process to appear ...
	I1024 19:58:50.084318   61522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:58:50.108459   61522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1024 19:58:50.231326   61522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1024 19:58:50.232182   61522 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1024 19:58:50.232198   61522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1024 19:58:50.336611   61522 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1024 19:58:50.336640   61522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1024 19:58:50.373307   61522 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1024 19:58:50.373331   61522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1024 19:58:50.439892   61522 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1024 19:58:50.439931   61522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1024 19:58:50.440391   61522 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1024 19:58:50.440411   61522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1024 19:58:50.534005   61522 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 19:58:50.534038   61522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1024 19:58:50.569372   61522 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1024 19:58:50.569392   61522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1024 19:58:50.663157   61522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1024 19:58:50.710453   61522 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1024 19:58:50.710479   61522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1024 19:58:50.926935   61522 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1024 19:58:50.926955   61522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1024 19:58:50.977281   61522 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1024 19:58:50.977314   61522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1024 19:58:51.049125   61522 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1024 19:58:51.049154   61522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1024 19:58:51.201585   61522 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1024 19:58:51.201612   61522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1024 19:58:51.300852   61522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
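
Both `kubectl apply` invocations above (metrics-server and dashboard) are assembled the same way: a single command with one `-f` flag per staged manifest, run through the same ssh_runner. A hedged sketch of how such a command line can be put together; `applyCmd` is a hypothetical helper, not minikube's addons code:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // applyCmd assembles one `kubectl apply` command line from a list of
    // staged manifests, mirroring the invocations in the log.
    func applyCmd(kubectl, kubeconfig string, files []string) string {
    	parts := []string{"sudo", "KUBECONFIG=" + kubeconfig, kubectl, "apply"}
    	for _, f := range files {
    		parts = append(parts, "-f", f)
    	}
    	return strings.Join(parts, " ")
    }

    func main() {
    	fmt.Println(applyCmd(
    		"/var/lib/minikube/binaries/v1.16.0/kubectl",
    		"/var/lib/minikube/kubeconfig",
    		[]string{
    			"/etc/kubernetes/addons/dashboard-ns.yaml",
    			"/etc/kubernetes/addons/dashboard-svc.yaml",
    		},
    	))
    }
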
	I1024 19:58:51.541389   61522 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.46266672s)
	I1024 19:58:51.541423   61522 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
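
The sed pipeline that just completed edits the kube-system/coredns ConfigMap in two places: it inserts a `log` directive ahead of the existing `errors` line, and a `hosts` block ahead of the `forward . /etc/resolv.conf` plugin so that host.minikube.internal resolves to the host-side gateway (192.168.72.1). Assuming an otherwise stock CoreDNS 1.6 Corefile, the patched server block would look roughly like:

    .:53 {
        log            # inserted before the existing `errors` line
        errors
        health
        hosts {        # inserted before the `forward` plugin
           192.168.72.1 host.minikube.internal
           fallthrough
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
        # ...remaining stock plugins unchanged
    }
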
	I1024 19:58:51.541479   61522 ssh_runner.go:235] Completed: docker images --format {{.Repository}}:{{.Tag}}: (1.63797939s)
	I1024 19:58:51.541512   61522 docker.go:689] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	k8s.gcr.io/pause:3.1
	registry.k8s.io/pause:3.1
	
	-- /stdout --
	I1024 19:58:51.541528   61522 cache_images.go:84] Images are preloaded, skipping loading
	I1024 19:58:51.541532   61522 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.457199821s)
	I1024 19:58:51.541537   61522 cache_images.go:262] succeeded pushing to: old-k8s-version-531596
	I1024 19:58:51.541543   61522 cache_images.go:263] failed pushing to: 
	I1024 19:58:51.541549   61522 api_server.go:72] duration metric: took 1.619141064s to wait for apiserver process to appear ...
	I1024 19:58:51.541559   61522 api_server.go:88] waiting for apiserver healthz status ...
	I1024 19:58:51.541571   61522 main.go:141] libmachine: Making call to close driver server
	I1024 19:58:51.541575   61522 api_server.go:253] Checking apiserver healthz at https://192.168.72.163:8443/healthz ...
	I1024 19:58:51.541586   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .Close
	I1024 19:58:51.541588   61522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.433103018s)
	I1024 19:58:51.541620   61522 main.go:141] libmachine: Making call to close driver server
	I1024 19:58:51.541638   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .Close
	I1024 19:58:51.541649   61522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.310288463s)
	I1024 19:58:51.541680   61522 main.go:141] libmachine: Making call to close driver server
	I1024 19:58:51.541699   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .Close
	I1024 19:58:51.541913   61522 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:58:51.541939   61522 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:58:51.541950   61522 main.go:141] libmachine: Making call to close driver server
	I1024 19:58:51.541958   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .Close
	I1024 19:58:51.543557   61522 main.go:141] libmachine: (old-k8s-version-531596) DBG | Closing plugin on server side
	I1024 19:58:51.543566   61522 main.go:141] libmachine: (old-k8s-version-531596) DBG | Closing plugin on server side
	I1024 19:58:51.543561   61522 main.go:141] libmachine: (old-k8s-version-531596) DBG | Closing plugin on server side
	I1024 19:58:51.543577   61522 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:58:51.543593   61522 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:58:51.543598   61522 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:58:51.543599   61522 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:58:51.543609   61522 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:58:51.543610   61522 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:58:51.543619   61522 main.go:141] libmachine: Making call to close driver server
	I1024 19:58:51.543621   61522 main.go:141] libmachine: Making call to close driver server
	I1024 19:58:51.543628   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .Close
	I1024 19:58:51.543630   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .Close
	I1024 19:58:51.543849   61522 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:58:51.543861   61522 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:58:51.543977   61522 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:58:51.543981   61522 main.go:141] libmachine: (old-k8s-version-531596) DBG | Closing plugin on server side
	I1024 19:58:51.543993   61522 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:58:51.562840   61522 api_server.go:279] https://192.168.72.163:8443/healthz returned 200:
	ok
	I1024 19:58:51.569288   61522 api_server.go:141] control plane version: v1.16.0
	I1024 19:58:51.569311   61522 api_server.go:131] duration metric: took 27.744861ms to wait for apiserver health ...
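
The healthz probe logged above is just an HTTPS GET against the apiserver's secure port that succeeds once the body reads `ok`. A throwaway Go probe equivalent to the logged check; it skips certificate verification, which is tolerable only against a disposable test VM (real clients should trust the cluster CA instead):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// The apiserver cert is signed by the cluster's own CA; this
    			// quick probe skips verification rather than loading that CA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://192.168.72.163:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz not reachable yet:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
    }
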
	I1024 19:58:51.569321   61522 system_pods.go:43] waiting for kube-system pods to appear ...
	I1024 19:58:51.572702   61522 main.go:141] libmachine: Making call to close driver server
	I1024 19:58:51.572722   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .Close
	I1024 19:58:51.572994   61522 main.go:141] libmachine: (old-k8s-version-531596) DBG | Closing plugin on server side
	I1024 19:58:51.573042   61522 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:58:51.573056   61522 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:58:51.574468   61522 system_pods.go:59] 3 kube-system pods found
	I1024 19:58:51.574490   61522 system_pods.go:61] "coredns-5644d7b6d9-vkwz4" [d75af10c-1985-45b1-b407-96d753d975ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 19:58:51.574498   61522 system_pods.go:61] "kube-proxy-ddtqv" [5ebe5f06-33df-4875-9e08-f48ad9395b92] Running
	I1024 19:58:51.574503   61522 system_pods.go:61] "storage-provisioner" [3873881e-f7bb-4c78-beb6-c069e6781cf5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 19:58:51.574510   61522 system_pods.go:74] duration metric: took 5.181877ms to wait for pod list to return data ...
	I1024 19:58:51.574516   61522 default_sa.go:34] waiting for default service account to be created ...
	I1024 19:58:51.584577   61522 default_sa.go:45] found service account: "default"
	I1024 19:58:51.584603   61522 default_sa.go:55] duration metric: took 10.078521ms for default service account to be created ...
	I1024 19:58:51.584613   61522 system_pods.go:116] waiting for k8s-apps to be running ...
	I1024 19:58:51.588793   61522 system_pods.go:86] 3 kube-system pods found
	I1024 19:58:51.588822   61522 system_pods.go:89] "coredns-5644d7b6d9-vkwz4" [d75af10c-1985-45b1-b407-96d753d975ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 19:58:51.588832   61522 system_pods.go:89] "kube-proxy-ddtqv" [5ebe5f06-33df-4875-9e08-f48ad9395b92] Running
	I1024 19:58:51.588841   61522 system_pods.go:89] "storage-provisioner" [3873881e-f7bb-4c78-beb6-c069e6781cf5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 19:58:51.588866   61522 retry.go:31] will retry after 204.757256ms: missing components: kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 19:58:51.814870   61522 system_pods.go:86] 3 kube-system pods found
	I1024 19:58:51.814897   61522 system_pods.go:89] "coredns-5644d7b6d9-vkwz4" [d75af10c-1985-45b1-b407-96d753d975ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 19:58:51.814904   61522 system_pods.go:89] "kube-proxy-ddtqv" [5ebe5f06-33df-4875-9e08-f48ad9395b92] Running
	I1024 19:58:51.814910   61522 system_pods.go:89] "storage-provisioner" [3873881e-f7bb-4c78-beb6-c069e6781cf5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 19:58:51.814925   61522 retry.go:31] will retry after 343.031691ms: missing components: kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 19:58:52.275439   61522 system_pods.go:86] 3 kube-system pods found
	I1024 19:58:52.275475   61522 system_pods.go:89] "coredns-5644d7b6d9-vkwz4" [d75af10c-1985-45b1-b407-96d753d975ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 19:58:52.275486   61522 system_pods.go:89] "kube-proxy-ddtqv" [5ebe5f06-33df-4875-9e08-f48ad9395b92] Running
	I1024 19:58:52.275495   61522 system_pods.go:89] "storage-provisioner" [3873881e-f7bb-4c78-beb6-c069e6781cf5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 19:58:52.275514   61522 retry.go:31] will retry after 379.468422ms: missing components: kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 19:58:52.816569   61522 system_pods.go:86] 3 kube-system pods found
	I1024 19:58:52.816606   61522 system_pods.go:89] "coredns-5644d7b6d9-vkwz4" [d75af10c-1985-45b1-b407-96d753d975ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 19:58:52.816618   61522 system_pods.go:89] "kube-proxy-ddtqv" [5ebe5f06-33df-4875-9e08-f48ad9395b92] Running
	I1024 19:58:52.816629   61522 system_pods.go:89] "storage-provisioner" [3873881e-f7bb-4c78-beb6-c069e6781cf5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 19:58:52.816651   61522 retry.go:31] will retry after 582.41301ms: missing components: kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 19:58:52.973186   61522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.309983381s)
	I1024 19:58:52.973251   61522 main.go:141] libmachine: Making call to close driver server
	I1024 19:58:52.973264   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .Close
	I1024 19:58:52.973552   61522 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:58:52.973576   61522 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:58:52.973587   61522 main.go:141] libmachine: Making call to close driver server
	I1024 19:58:52.973597   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .Close
	I1024 19:58:52.973954   61522 main.go:141] libmachine: (old-k8s-version-531596) DBG | Closing plugin on server side
	I1024 19:58:52.973964   61522 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:58:52.973978   61522 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:58:52.973995   61522 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-531596"
	I1024 19:58:53.086114   61522 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.78521141s)
	I1024 19:58:53.086173   61522 main.go:141] libmachine: Making call to close driver server
	I1024 19:58:53.086185   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .Close
	I1024 19:58:53.086460   61522 main.go:141] libmachine: (old-k8s-version-531596) DBG | Closing plugin on server side
	I1024 19:58:53.086500   61522 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:58:53.086519   61522 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:58:53.086529   61522 main.go:141] libmachine: Making call to close driver server
	I1024 19:58:53.086542   61522 main.go:141] libmachine: (old-k8s-version-531596) Calling .Close
	I1024 19:58:53.086774   61522 main.go:141] libmachine: Successfully made call to close driver server
	I1024 19:58:53.086795   61522 main.go:141] libmachine: Making call to close connection to plugin binary
	I1024 19:58:53.088281   61522 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-531596 addons enable metrics-server	
	
	
	I1024 19:58:53.089825   61522 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1024 19:58:53.091388   61522 addons.go:502] enable addons completed in 3.238180617s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1024 19:58:53.413531   61522 system_pods.go:86] 4 kube-system pods found
	I1024 19:58:53.413560   61522 system_pods.go:89] "coredns-5644d7b6d9-vkwz4" [d75af10c-1985-45b1-b407-96d753d975ea] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 19:58:53.413568   61522 system_pods.go:89] "kube-proxy-ddtqv" [5ebe5f06-33df-4875-9e08-f48ad9395b92] Running
	I1024 19:58:53.413573   61522 system_pods.go:89] "metrics-server-74d5856cc6-klqjh" [cb87274d-5115-4efc-9b33-cf4037cc5124] Pending
	I1024 19:58:53.413579   61522 system_pods.go:89] "storage-provisioner" [3873881e-f7bb-4c78-beb6-c069e6781cf5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1024 19:58:53.413595   61522 retry.go:31] will retry after 473.519322ms: missing components: kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 19:58:53.902186   61522 system_pods.go:86] 4 kube-system pods found
	I1024 19:58:53.902219   61522 system_pods.go:89] "coredns-5644d7b6d9-vkwz4" [d75af10c-1985-45b1-b407-96d753d975ea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 19:58:53.902231   61522 system_pods.go:89] "kube-proxy-ddtqv" [5ebe5f06-33df-4875-9e08-f48ad9395b92] Running
	I1024 19:58:53.902240   61522 system_pods.go:89] "metrics-server-74d5856cc6-klqjh" [cb87274d-5115-4efc-9b33-cf4037cc5124] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:58:53.902248   61522 system_pods.go:89] "storage-provisioner" [3873881e-f7bb-4c78-beb6-c069e6781cf5] Running
	I1024 19:58:53.902272   61522 retry.go:31] will retry after 900.198974ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 19:58:54.808875   61522 system_pods.go:86] 4 kube-system pods found
	I1024 19:58:54.808901   61522 system_pods.go:89] "coredns-5644d7b6d9-vkwz4" [d75af10c-1985-45b1-b407-96d753d975ea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 19:58:54.808909   61522 system_pods.go:89] "kube-proxy-ddtqv" [5ebe5f06-33df-4875-9e08-f48ad9395b92] Running
	I1024 19:58:54.808916   61522 system_pods.go:89] "metrics-server-74d5856cc6-klqjh" [cb87274d-5115-4efc-9b33-cf4037cc5124] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:58:54.808921   61522 system_pods.go:89] "storage-provisioner" [3873881e-f7bb-4c78-beb6-c069e6781cf5] Running
	I1024 19:58:54.808935   61522 retry.go:31] will retry after 984.601052ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 19:58:55.797667   61522 system_pods.go:86] 4 kube-system pods found
	I1024 19:58:55.797695   61522 system_pods.go:89] "coredns-5644d7b6d9-vkwz4" [d75af10c-1985-45b1-b407-96d753d975ea] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1024 19:58:55.797706   61522 system_pods.go:89] "kube-proxy-ddtqv" [5ebe5f06-33df-4875-9e08-f48ad9395b92] Running
	I1024 19:58:55.797714   61522 system_pods.go:89] "metrics-server-74d5856cc6-klqjh" [cb87274d-5115-4efc-9b33-cf4037cc5124] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:58:55.797724   61522 system_pods.go:89] "storage-provisioner" [3873881e-f7bb-4c78-beb6-c069e6781cf5] Running
	I1024 19:58:55.797741   61522 retry.go:31] will retry after 1.429909001s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 19:58:57.234683   61522 system_pods.go:86] 4 kube-system pods found
	I1024 19:58:57.234720   61522 system_pods.go:89] "coredns-5644d7b6d9-vkwz4" [d75af10c-1985-45b1-b407-96d753d975ea] Running
	I1024 19:58:57.234731   61522 system_pods.go:89] "kube-proxy-ddtqv" [5ebe5f06-33df-4875-9e08-f48ad9395b92] Running
	I1024 19:58:57.234741   61522 system_pods.go:89] "metrics-server-74d5856cc6-klqjh" [cb87274d-5115-4efc-9b33-cf4037cc5124] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:58:57.234758   61522 system_pods.go:89] "storage-provisioner" [3873881e-f7bb-4c78-beb6-c069e6781cf5] Running
	I1024 19:58:57.234778   61522 retry.go:31] will retry after 1.247060445s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 19:58:58.486621   61522 system_pods.go:86] 4 kube-system pods found
	I1024 19:58:58.486644   61522 system_pods.go:89] "coredns-5644d7b6d9-vkwz4" [d75af10c-1985-45b1-b407-96d753d975ea] Running
	I1024 19:58:58.486649   61522 system_pods.go:89] "kube-proxy-ddtqv" [5ebe5f06-33df-4875-9e08-f48ad9395b92] Running
	I1024 19:58:58.486655   61522 system_pods.go:89] "metrics-server-74d5856cc6-klqjh" [cb87274d-5115-4efc-9b33-cf4037cc5124] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:58:58.486665   61522 system_pods.go:89] "storage-provisioner" [3873881e-f7bb-4c78-beb6-c069e6781cf5] Running
	I1024 19:58:58.486680   61522 retry.go:31] will retry after 1.654348021s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 19:59:00.145755   61522 system_pods.go:86] 4 kube-system pods found
	I1024 19:59:00.145784   61522 system_pods.go:89] "coredns-5644d7b6d9-vkwz4" [d75af10c-1985-45b1-b407-96d753d975ea] Running
	I1024 19:59:00.145794   61522 system_pods.go:89] "kube-proxy-ddtqv" [5ebe5f06-33df-4875-9e08-f48ad9395b92] Running
	I1024 19:59:00.145800   61522 system_pods.go:89] "metrics-server-74d5856cc6-klqjh" [cb87274d-5115-4efc-9b33-cf4037cc5124] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:59:00.145807   61522 system_pods.go:89] "storage-provisioner" [3873881e-f7bb-4c78-beb6-c069e6781cf5] Running
	I1024 19:59:00.145821   61522 retry.go:31] will retry after 2.474157889s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 19:59:02.624557   61522 system_pods.go:86] 4 kube-system pods found
	I1024 19:59:02.624584   61522 system_pods.go:89] "coredns-5644d7b6d9-vkwz4" [d75af10c-1985-45b1-b407-96d753d975ea] Running
	I1024 19:59:02.624589   61522 system_pods.go:89] "kube-proxy-ddtqv" [5ebe5f06-33df-4875-9e08-f48ad9395b92] Running
	I1024 19:59:02.624595   61522 system_pods.go:89] "metrics-server-74d5856cc6-klqjh" [cb87274d-5115-4efc-9b33-cf4037cc5124] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:59:02.624601   61522 system_pods.go:89] "storage-provisioner" [3873881e-f7bb-4c78-beb6-c069e6781cf5] Running
	I1024 19:59:02.624615   61522 retry.go:31] will retry after 3.471180356s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 19:59:06.100511   61522 system_pods.go:86] 4 kube-system pods found
	I1024 19:59:06.100538   61522 system_pods.go:89] "coredns-5644d7b6d9-vkwz4" [d75af10c-1985-45b1-b407-96d753d975ea] Running
	I1024 19:59:06.100543   61522 system_pods.go:89] "kube-proxy-ddtqv" [5ebe5f06-33df-4875-9e08-f48ad9395b92] Running
	I1024 19:59:06.100549   61522 system_pods.go:89] "metrics-server-74d5856cc6-klqjh" [cb87274d-5115-4efc-9b33-cf4037cc5124] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:59:06.100555   61522 system_pods.go:89] "storage-provisioner" [3873881e-f7bb-4c78-beb6-c069e6781cf5] Running
	I1024 19:59:06.100569   61522 retry.go:31] will retry after 2.947170633s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 19:59:09.053486   61522 system_pods.go:86] 4 kube-system pods found
	I1024 19:59:09.053514   61522 system_pods.go:89] "coredns-5644d7b6d9-vkwz4" [d75af10c-1985-45b1-b407-96d753d975ea] Running
	I1024 19:59:09.053519   61522 system_pods.go:89] "kube-proxy-ddtqv" [5ebe5f06-33df-4875-9e08-f48ad9395b92] Running
	I1024 19:59:09.053528   61522 system_pods.go:89] "metrics-server-74d5856cc6-klqjh" [cb87274d-5115-4efc-9b33-cf4037cc5124] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:59:09.053534   61522 system_pods.go:89] "storage-provisioner" [3873881e-f7bb-4c78-beb6-c069e6781cf5] Running
	I1024 19:59:09.053549   61522 retry.go:31] will retry after 5.0721197s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 19:59:14.131040   61522 system_pods.go:86] 4 kube-system pods found
	I1024 19:59:14.131067   61522 system_pods.go:89] "coredns-5644d7b6d9-vkwz4" [d75af10c-1985-45b1-b407-96d753d975ea] Running
	I1024 19:59:14.131072   61522 system_pods.go:89] "kube-proxy-ddtqv" [5ebe5f06-33df-4875-9e08-f48ad9395b92] Running
	I1024 19:59:14.131079   61522 system_pods.go:89] "metrics-server-74d5856cc6-klqjh" [cb87274d-5115-4efc-9b33-cf4037cc5124] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:59:14.131086   61522 system_pods.go:89] "storage-provisioner" [3873881e-f7bb-4c78-beb6-c069e6781cf5] Running
	I1024 19:59:14.131100   61522 retry.go:31] will retry after 5.567887087s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 19:59:19.703574   61522 system_pods.go:86] 4 kube-system pods found
	I1024 19:59:19.703601   61522 system_pods.go:89] "coredns-5644d7b6d9-vkwz4" [d75af10c-1985-45b1-b407-96d753d975ea] Running
	I1024 19:59:19.703608   61522 system_pods.go:89] "kube-proxy-ddtqv" [5ebe5f06-33df-4875-9e08-f48ad9395b92] Running
	I1024 19:59:19.703614   61522 system_pods.go:89] "metrics-server-74d5856cc6-klqjh" [cb87274d-5115-4efc-9b33-cf4037cc5124] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:59:19.703621   61522 system_pods.go:89] "storage-provisioner" [3873881e-f7bb-4c78-beb6-c069e6781cf5] Running
	I1024 19:59:19.703635   61522 retry.go:31] will retry after 8.391121227s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 19:59:28.100013   61522 system_pods.go:86] 4 kube-system pods found
	I1024 19:59:28.100037   61522 system_pods.go:89] "coredns-5644d7b6d9-vkwz4" [d75af10c-1985-45b1-b407-96d753d975ea] Running
	I1024 19:59:28.100042   61522 system_pods.go:89] "kube-proxy-ddtqv" [5ebe5f06-33df-4875-9e08-f48ad9395b92] Running
	I1024 19:59:28.100049   61522 system_pods.go:89] "metrics-server-74d5856cc6-klqjh" [cb87274d-5115-4efc-9b33-cf4037cc5124] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:59:28.100055   61522 system_pods.go:89] "storage-provisioner" [3873881e-f7bb-4c78-beb6-c069e6781cf5] Running
	I1024 19:59:28.100068   61522 retry.go:31] will retry after 8.523739132s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 19:59:36.631722   61522 system_pods.go:86] 5 kube-system pods found
	I1024 19:59:36.631746   61522 system_pods.go:89] "coredns-5644d7b6d9-vkwz4" [d75af10c-1985-45b1-b407-96d753d975ea] Running
	I1024 19:59:36.631752   61522 system_pods.go:89] "kube-controller-manager-old-k8s-version-531596" [ef260c6d-d8a5-4f09-9f9c-e8eedb93c9b9] Pending
	I1024 19:59:36.631756   61522 system_pods.go:89] "kube-proxy-ddtqv" [5ebe5f06-33df-4875-9e08-f48ad9395b92] Running
	I1024 19:59:36.631763   61522 system_pods.go:89] "metrics-server-74d5856cc6-klqjh" [cb87274d-5115-4efc-9b33-cf4037cc5124] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:59:36.631773   61522 system_pods.go:89] "storage-provisioner" [3873881e-f7bb-4c78-beb6-c069e6781cf5] Running
	I1024 19:59:36.631789   61522 retry.go:31] will retry after 9.749260788s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1024 19:59:46.386861   61522 system_pods.go:86] 7 kube-system pods found
	I1024 19:59:46.386886   61522 system_pods.go:89] "coredns-5644d7b6d9-vkwz4" [d75af10c-1985-45b1-b407-96d753d975ea] Running
	I1024 19:59:46.386892   61522 system_pods.go:89] "etcd-old-k8s-version-531596" [4e02bccd-85f2-4da6-a8cc-6b217369682d] Running
	I1024 19:59:46.386896   61522 system_pods.go:89] "kube-controller-manager-old-k8s-version-531596" [ef260c6d-d8a5-4f09-9f9c-e8eedb93c9b9] Running
	I1024 19:59:46.386900   61522 system_pods.go:89] "kube-proxy-ddtqv" [5ebe5f06-33df-4875-9e08-f48ad9395b92] Running
	I1024 19:59:46.386904   61522 system_pods.go:89] "kube-scheduler-old-k8s-version-531596" [67cc436f-cb69-4e57-a36d-097ff8a0871c] Pending
	I1024 19:59:46.386910   61522 system_pods.go:89] "metrics-server-74d5856cc6-klqjh" [cb87274d-5115-4efc-9b33-cf4037cc5124] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:59:46.386916   61522 system_pods.go:89] "storage-provisioner" [3873881e-f7bb-4c78-beb6-c069e6781cf5] Running
	I1024 19:59:46.386929   61522 retry.go:31] will retry after 11.95178335s: missing components: kube-apiserver, kube-scheduler
	I1024 19:59:58.345007   61522 system_pods.go:86] 8 kube-system pods found
	I1024 19:59:58.345032   61522 system_pods.go:89] "coredns-5644d7b6d9-vkwz4" [d75af10c-1985-45b1-b407-96d753d975ea] Running
	I1024 19:59:58.345037   61522 system_pods.go:89] "etcd-old-k8s-version-531596" [4e02bccd-85f2-4da6-a8cc-6b217369682d] Running
	I1024 19:59:58.345041   61522 system_pods.go:89] "kube-apiserver-old-k8s-version-531596" [73fd71ad-b2bd-423b-b6a8-a5df806651d6] Pending
	I1024 19:59:58.345047   61522 system_pods.go:89] "kube-controller-manager-old-k8s-version-531596" [ef260c6d-d8a5-4f09-9f9c-e8eedb93c9b9] Running
	I1024 19:59:58.345051   61522 system_pods.go:89] "kube-proxy-ddtqv" [5ebe5f06-33df-4875-9e08-f48ad9395b92] Running
	I1024 19:59:58.345055   61522 system_pods.go:89] "kube-scheduler-old-k8s-version-531596" [67cc436f-cb69-4e57-a36d-097ff8a0871c] Running
	I1024 19:59:58.345061   61522 system_pods.go:89] "metrics-server-74d5856cc6-klqjh" [cb87274d-5115-4efc-9b33-cf4037cc5124] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 19:59:58.345068   61522 system_pods.go:89] "storage-provisioner" [3873881e-f7bb-4c78-beb6-c069e6781cf5] Running
	I1024 19:59:58.345081   61522 retry.go:31] will retry after 20.338125917s: missing components: kube-apiserver
	I1024 20:00:18.690009   61522 system_pods.go:86] 8 kube-system pods found
	I1024 20:00:18.690047   61522 system_pods.go:89] "coredns-5644d7b6d9-vkwz4" [d75af10c-1985-45b1-b407-96d753d975ea] Running
	I1024 20:00:18.690052   61522 system_pods.go:89] "etcd-old-k8s-version-531596" [4e02bccd-85f2-4da6-a8cc-6b217369682d] Running
	I1024 20:00:18.690057   61522 system_pods.go:89] "kube-apiserver-old-k8s-version-531596" [73fd71ad-b2bd-423b-b6a8-a5df806651d6] Running
	I1024 20:00:18.690061   61522 system_pods.go:89] "kube-controller-manager-old-k8s-version-531596" [ef260c6d-d8a5-4f09-9f9c-e8eedb93c9b9] Running
	I1024 20:00:18.690064   61522 system_pods.go:89] "kube-proxy-ddtqv" [5ebe5f06-33df-4875-9e08-f48ad9395b92] Running
	I1024 20:00:18.690069   61522 system_pods.go:89] "kube-scheduler-old-k8s-version-531596" [67cc436f-cb69-4e57-a36d-097ff8a0871c] Running
	I1024 20:00:18.690076   61522 system_pods.go:89] "metrics-server-74d5856cc6-klqjh" [cb87274d-5115-4efc-9b33-cf4037cc5124] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1024 20:00:18.690083   61522 system_pods.go:89] "storage-provisioner" [3873881e-f7bb-4c78-beb6-c069e6781cf5] Running
	I1024 20:00:18.690090   61522 system_pods.go:126] duration metric: took 1m27.105471966s to wait for k8s-apps to be running ...
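
The long run of `retry.go:31] will retry after ...` lines above (waits growing from ~200ms to ~20s between `system_pods` polls) is a jittered exponential backoff: each failed check roughly doubles the next wait, randomized, until the 6m0s budget is spent. A self-contained sketch of the pattern; this mimics the observed intervals and is not minikube's retry package:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryExpo polls check() with jittered exponential backoff until it
    // succeeds or the deadline elapses.
    func retryExpo(check func() error, initial, deadline time.Duration) error {
    	start := time.Now()
    	wait := initial
    	for {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		if time.Since(start) > deadline {
    			return fmt.Errorf("timed out after %s: %w", deadline, err)
    		}
    		// Sleep between 1x and 2x the current wait, then double it.
    		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
    		fmt.Printf("will retry after %s: %v\n", sleep, err)
    		time.Sleep(sleep)
    		wait *= 2
    	}
    }

    func main() {
    	tries := 0
    	_ = retryExpo(func() error {
    		tries++
    		if tries < 5 {
    			return errors.New("missing components: kube-apiserver")
    		}
    		return nil
    	}, 200*time.Millisecond, 6*time.Minute)
    }
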
	I1024 20:00:18.690097   61522 system_svc.go:44] waiting for kubelet service to be running ....
	I1024 20:00:18.690140   61522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 20:00:18.706955   61522 system_svc.go:56] duration metric: took 16.847612ms WaitForService to wait for kubelet.
	I1024 20:00:18.706977   61522 kubeadm.go:581] duration metric: took 1m28.784569914s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1024 20:00:18.706996   61522 node_conditions.go:102] verifying NodePressure condition ...
	I1024 20:00:18.710353   61522 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1024 20:00:18.710376   61522 node_conditions.go:123] node cpu capacity is 2
	I1024 20:00:18.710387   61522 node_conditions.go:105] duration metric: took 3.387349ms to run NodePressure ...
	I1024 20:00:18.710397   61522 start.go:228] waiting for startup goroutines ...
	I1024 20:00:18.710403   61522 start.go:233] waiting for cluster config update ...
	I1024 20:00:18.710415   61522 start.go:242] writing updated cluster config ...
	I1024 20:00:18.710703   61522 ssh_runner.go:195] Run: rm -f paused
	I1024 20:00:18.758540   61522 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1024 20:00:18.760757   61522 out.go:177] 
	W1024 20:00:18.762209   61522 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1024 20:00:18.763612   61522 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1024 20:00:18.765183   61522 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-531596" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Tue 2023-10-24 19:52:45 UTC, ends at Tue 2023-10-24 20:00:30 UTC. --
	Oct 24 19:59:09 old-k8s-version-531596 dockerd[1220]: time="2023-10-24T19:59:09.332083815Z" level=info msg="shim disconnected" id=662366ef39f6d73b43c59eeccd1bc6e283fc341675e4eee7dff3c375dbb04438 namespace=moby
	Oct 24 19:59:09 old-k8s-version-531596 dockerd[1220]: time="2023-10-24T19:59:09.332216881Z" level=warning msg="cleaning up after shim disconnected" id=662366ef39f6d73b43c59eeccd1bc6e283fc341675e4eee7dff3c375dbb04438 namespace=moby
	Oct 24 19:59:09 old-k8s-version-531596 dockerd[1220]: time="2023-10-24T19:59:09.332274458Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 24 19:59:31 old-k8s-version-531596 dockerd[1220]: time="2023-10-24T19:59:31.256703517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 24 19:59:31 old-k8s-version-531596 dockerd[1220]: time="2023-10-24T19:59:31.256796202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 24 19:59:31 old-k8s-version-531596 dockerd[1220]: time="2023-10-24T19:59:31.256818198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 24 19:59:31 old-k8s-version-531596 dockerd[1220]: time="2023-10-24T19:59:31.256830712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 24 19:59:31 old-k8s-version-531596 dockerd[1214]: time="2023-10-24T19:59:31.652235674Z" level=info msg="ignoring event" container=c57fa5f8d6d2eb83ff1071dfac02c4adfc971cbe900f35f8373ff3d55a5f190b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 24 19:59:31 old-k8s-version-531596 dockerd[1220]: time="2023-10-24T19:59:31.653208169Z" level=info msg="shim disconnected" id=c57fa5f8d6d2eb83ff1071dfac02c4adfc971cbe900f35f8373ff3d55a5f190b namespace=moby
	Oct 24 19:59:31 old-k8s-version-531596 dockerd[1220]: time="2023-10-24T19:59:31.653251843Z" level=warning msg="cleaning up after shim disconnected" id=c57fa5f8d6d2eb83ff1071dfac02c4adfc971cbe900f35f8373ff3d55a5f190b namespace=moby
	Oct 24 19:59:31 old-k8s-version-531596 dockerd[1220]: time="2023-10-24T19:59:31.653260119Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 24 19:59:35 old-k8s-version-531596 dockerd[1214]: time="2023-10-24T19:59:35.186077603Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 24 19:59:35 old-k8s-version-531596 dockerd[1214]: time="2023-10-24T19:59:35.186103589Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 24 19:59:35 old-k8s-version-531596 dockerd[1214]: time="2023-10-24T19:59:35.189591522Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 24 20:00:06 old-k8s-version-531596 dockerd[1220]: time="2023-10-24T20:00:06.271866930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 24 20:00:06 old-k8s-version-531596 dockerd[1220]: time="2023-10-24T20:00:06.271982301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 24 20:00:06 old-k8s-version-531596 dockerd[1220]: time="2023-10-24T20:00:06.271996929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 24 20:00:06 old-k8s-version-531596 dockerd[1220]: time="2023-10-24T20:00:06.272008807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 24 20:00:06 old-k8s-version-531596 dockerd[1220]: time="2023-10-24T20:00:06.672324602Z" level=info msg="shim disconnected" id=aa57d016c2a117dac12df021d2f88994d35d4a916203dae30827b67a1c55c1d4 namespace=moby
	Oct 24 20:00:06 old-k8s-version-531596 dockerd[1220]: time="2023-10-24T20:00:06.672509277Z" level=warning msg="cleaning up after shim disconnected" id=aa57d016c2a117dac12df021d2f88994d35d4a916203dae30827b67a1c55c1d4 namespace=moby
	Oct 24 20:00:06 old-k8s-version-531596 dockerd[1220]: time="2023-10-24T20:00:06.672537992Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 24 20:00:06 old-k8s-version-531596 dockerd[1214]: time="2023-10-24T20:00:06.674080703Z" level=info msg="ignoring event" container=aa57d016c2a117dac12df021d2f88994d35d4a916203dae30827b67a1c55c1d4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 24 20:00:28 old-k8s-version-531596 dockerd[1214]: time="2023-10-24T20:00:28.209007227Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 24 20:00:28 old-k8s-version-531596 dockerd[1214]: time="2023-10-24T20:00:28.209044429Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 24 20:00:28 old-k8s-version-531596 dockerd[1214]: time="2023-10-24T20:00:28.212189314Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	
	* 
	* ==> container status <==
	* time="2023-10-24T20:00:30Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	CONTAINER ID   IMAGE                    COMMAND                  CREATED              STATUS                      PORTS     NAMES
	aa57d016c2a1   a90209bb39e3             "nginx -g 'daemon of…"   24 seconds ago       Exited (1) 23 seconds ago             k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-d6b4b5544-bvr9r_kubernetes-dashboard_685ebd0b-3a1f-47ba-9405-6a5aea0d16ee_3
	be1180be37fd   kubernetesui/dashboard   "/dashboard --insecu…"   About a minute ago   Up About a minute                     k8s_kubernetes-dashboard_kubernetes-dashboard-84b68f675b-vv7dw_kubernetes-dashboard_7f908da8-db9e-4f8a-a3b3-843575ba8806_0
	cbbf83f9aea4   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_dashboard-metrics-scraper-d6b4b5544-bvr9r_kubernetes-dashboard_685ebd0b-3a1f-47ba-9405-6a5aea0d16ee_0
	41d55e41794a   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kubernetes-dashboard-84b68f675b-vv7dw_kubernetes-dashboard_7f908da8-db9e-4f8a-a3b3-843575ba8806_0
	68ed41b02d00   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_metrics-server-74d5856cc6-klqjh_kube-system_cb87274d-5115-4efc-9b33-cf4037cc5124_0
	3f0919e1b5ce   6e38f40d628d             "/storage-provisioner"   About a minute ago   Up About a minute                     k8s_storage-provisioner_storage-provisioner_kube-system_3873881e-f7bb-4c78-beb6-c069e6781cf5_0
	55797cde172a   bf261d157914             "/coredns -conf /etc…"   About a minute ago   Up About a minute                     k8s_coredns_coredns-5644d7b6d9-vkwz4_kube-system_d75af10c-1985-45b1-b407-96d753d975ea_0
	c66b488941b8   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_storage-provisioner_kube-system_3873881e-f7bb-4c78-beb6-c069e6781cf5_0
	c59df400a757   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_coredns-5644d7b6d9-vkwz4_kube-system_d75af10c-1985-45b1-b407-96d753d975ea_0
	b1d416887ae6   c21b0c7400f9             "/usr/local/bin/kube…"   About a minute ago   Up About a minute                     k8s_kube-proxy_kube-proxy-ddtqv_kube-system_5ebe5f06-33df-4875-9e08-f48ad9395b92_0
	13d4ac0cdc2e   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-proxy-ddtqv_kube-system_5ebe5f06-33df-4875-9e08-f48ad9395b92_0
	7e2dc941526d   06a629a7e51c             "kube-controller-man…"   2 minutes ago        Up 2 minutes                          k8s_kube-controller-manager_kube-controller-manager-old-k8s-version-531596_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	52085ad7ca91   b2756210eeab             "etcd --advertise-cl…"   2 minutes ago        Up 2 minutes                          k8s_etcd_etcd-old-k8s-version-531596_kube-system_96c4c1a37aa9cd999d45cef6be5dd030_0
	df740a7cf91b   301ddc62b80b             "kube-scheduler --au…"   2 minutes ago        Up 2 minutes                          k8s_kube-scheduler_kube-scheduler-old-k8s-version-531596_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	8555353c0092   b305571ca60a             "kube-apiserver --ad…"   2 minutes ago        Up 2 minutes                          k8s_kube-apiserver_kube-apiserver-old-k8s-version-531596_kube-system_415a62c7bcf79a478c356cd000cbe1f7_0
	1e33861a8460   k8s.gcr.io/pause:3.1     "/pause"                 2 minutes ago        Up 2 minutes                          k8s_POD_etcd-old-k8s-version-531596_kube-system_96c4c1a37aa9cd999d45cef6be5dd030_0
	8e8910837837   k8s.gcr.io/pause:3.1     "/pause"                 2 minutes ago        Up 2 minutes                          k8s_POD_kube-scheduler-old-k8s-version-531596_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	684943c62036   k8s.gcr.io/pause:3.1     "/pause"                 2 minutes ago        Up 2 minutes                          k8s_POD_kube-controller-manager-old-k8s-version-531596_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	8c516670e1b1   k8s.gcr.io/pause:3.1     "/pause"                 2 minutes ago        Up 2 minutes                          k8s_POD_kube-apiserver-old-k8s-version-531596_kube-system_415a62c7bcf79a478c356cd000cbe1f7_0
	
	* 
	* ==> coredns [55797cde172a] <==
	* .:53
	2023-10-24T19:58:52.665Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	2023-10-24T19:58:52.665Z [INFO] CoreDNS-1.6.2
	2023-10-24T19:58:52.665Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-10-24T19:58:52.696Z [INFO] 127.0.0.1:37226 - 39803 "HINFO IN 4829596949240437622.5335352742132593665. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029862116s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-531596
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-531596
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=88664ba50fde9b6a83229504adac261395c47fca
	                    minikube.k8s.io/name=old-k8s-version-531596
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_24T19_58_35_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Oct 2023 19:58:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Oct 2023 19:59:30 +0000   Tue, 24 Oct 2023 19:58:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Oct 2023 19:59:30 +0000   Tue, 24 Oct 2023 19:58:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Oct 2023 19:59:30 +0000   Tue, 24 Oct 2023 19:58:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Oct 2023 19:59:30 +0000   Tue, 24 Oct 2023 19:58:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.163
	  Hostname:    old-k8s-version-531596
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 33d71aa077e04f4d8362d894fb8d9f3f
	 System UUID:                33d71aa0-77e0-4f4d-8362-d894fb8d9f3f
	 Boot ID:                    08b8f8d8-114a-4505-93f2-cda77b7e44a1
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://24.0.6
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-vkwz4                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     100s
	  kube-system                etcd-old-k8s-version-531596                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                kube-apiserver-old-k8s-version-531596             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                kube-controller-manager-old-k8s-version-531596    200m (10%)    0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                kube-proxy-ddtqv                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                kube-scheduler-old-k8s-version-531596             100m (5%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                metrics-server-74d5856cc6-klqjh                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         97s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kubernetes-dashboard       dashboard-metrics-scraper-d6b4b5544-bvr9r         0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kubernetes-dashboard       kubernetes-dashboard-84b68f675b-vv7dw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From                                Message
	  ----    ------                   ----                 ----                                -------
	  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet, old-k8s-version-531596     Node old-k8s-version-531596 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet, old-k8s-version-531596     Node old-k8s-version-531596 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s (x7 over 2m6s)  kubelet, old-k8s-version-531596     Node old-k8s-version-531596 status is now: NodeHasSufficientPID
	  Normal  Starting                 99s                  kube-proxy, old-k8s-version-531596  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Oct24 19:52] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.073162] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.578250] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.547118] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.157075] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.599117] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000015] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.599214] systemd-fstab-generator[514]: Ignoring "noauto" for root device
	[  +0.166125] systemd-fstab-generator[531]: Ignoring "noauto" for root device
	[  +1.380494] systemd-fstab-generator[802]: Ignoring "noauto" for root device
	[  +0.396389] systemd-fstab-generator[839]: Ignoring "noauto" for root device
	[  +0.121256] systemd-fstab-generator[850]: Ignoring "noauto" for root device
	[  +0.172347] systemd-fstab-generator[863]: Ignoring "noauto" for root device
	[Oct24 19:53] systemd-fstab-generator[1185]: Ignoring "noauto" for root device
	[  +2.468971] kauditd_printk_skb: 67 callbacks suppressed
	[ +13.224732] systemd-fstab-generator[1660]: Ignoring "noauto" for root device
	[  +0.512662] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.212037] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +25.303937] kauditd_printk_skb: 5 callbacks suppressed
	[Oct24 19:58] systemd-fstab-generator[7016]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [52085ad7ca91] <==
	* 2023-10-24 19:58:26.081611 I | raft: 3dd8974a0ddcfcd8 became follower at term 0
	2023-10-24 19:58:26.081618 I | raft: newRaft 3dd8974a0ddcfcd8 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-10-24 19:58:26.081621 I | raft: 3dd8974a0ddcfcd8 became follower at term 1
	2023-10-24 19:58:26.181056 W | auth: simple token is not cryptographically signed
	2023-10-24 19:58:26.435944 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-10-24 19:58:26.543702 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-24 19:58:26.605399 I | etcdserver: 3dd8974a0ddcfcd8 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-24 19:58:26.606134 I | etcdserver/membership: added member 3dd8974a0ddcfcd8 [https://192.168.72.163:2380] to cluster 31866a174e81d2aa
	2023-10-24 19:58:26.606784 I | embed: listening for metrics on http://192.168.72.163:2381
	2023-10-24 19:58:26.607138 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-10-24 19:58:26.656806 I | raft: 3dd8974a0ddcfcd8 is starting a new election at term 1
	2023-10-24 19:58:26.657061 I | raft: 3dd8974a0ddcfcd8 became candidate at term 2
	2023-10-24 19:58:26.657181 I | raft: 3dd8974a0ddcfcd8 received MsgVoteResp from 3dd8974a0ddcfcd8 at term 2
	2023-10-24 19:58:26.657346 I | raft: 3dd8974a0ddcfcd8 became leader at term 2
	2023-10-24 19:58:26.657517 I | raft: raft.node: 3dd8974a0ddcfcd8 elected leader 3dd8974a0ddcfcd8 at term 2
	2023-10-24 19:58:26.658163 I | etcdserver: setting up the initial cluster version to 3.3
	2023-10-24 19:58:26.660047 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-10-24 19:58:26.660720 I | etcdserver/api: enabled capabilities for version 3.3
	2023-10-24 19:58:26.660868 I | etcdserver: published {Name:old-k8s-version-531596 ClientURLs:[https://192.168.72.163:2379]} to cluster 31866a174e81d2aa
	2023-10-24 19:58:26.660980 I | embed: ready to serve client requests
	2023-10-24 19:58:26.661360 I | embed: ready to serve client requests
	2023-10-24 19:58:26.662857 I | embed: serving client requests on 192.168.72.163:2379
	2023-10-24 19:58:26.670744 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-24 19:58:34.230507 W | etcdserver: request "header:<ID:18219465550992188158 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/roles/kube-system/kubeadm:kubelet-config-1.16\" mod_revision:0 > success:<request_put:<key:\"/registry/roles/kube-system/kubeadm:kubelet-config-1.16\" value_size:193 >> failure:<>>" with result "size:16" took too long (110.140897ms) to execute
	2023-10-24 19:59:06.298380 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-74d5856cc6-klqjh.17912378511bd19e\" " with result "range_response_count:1 size:533" took too long (107.330974ms) to execute
	
	* 
	* ==> kernel <==
	*  20:00:30 up 7 min,  0 users,  load average: 0.95, 0.85, 0.41
	Linux old-k8s-version-531596 5.10.57 #1 SMP Mon Oct 16 20:35:28 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [8555353c0092] <==
	* I1024 19:58:31.261845       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1024 19:58:31.269119       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I1024 19:58:31.280353       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I1024 19:58:31.280398       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1024 19:58:33.043121       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1024 19:58:33.323800       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1024 19:58:33.630616       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.72.163]
	I1024 19:58:33.631601       1 controller.go:606] quota admission added evaluator for: endpoints
	I1024 19:58:33.670899       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1024 19:58:34.554917       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1024 19:58:35.362074       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1024 19:58:35.693012       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1024 19:58:50.101871       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1024 19:58:50.128942       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I1024 19:58:50.233844       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1024 19:58:54.233645       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1024 19:58:54.233736       1 handler_proxy.go:99] no RequestInfo found in the context
	E1024 19:58:54.233799       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 19:58:54.233806       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1024 19:59:54.234133       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1024 19:59:54.234277       1 handler_proxy.go:99] no RequestInfo found in the context
	E1024 19:59:54.234334       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1024 19:59:54.234341       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [7e2dc941526d] <==
	* I1024 19:58:52.783243       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"11a4d3f2-05f3-4e06-bb0c-0526bf3271f5", APIVersion:"apps/v1", ResourceVersion:"398", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1024 19:58:52.783258       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"5cab9194-ccf0-4d7d-b644-eceee30d83fa", APIVersion:"apps/v1", ResourceVersion:"399", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-84b68f675b to 1
	E1024 19:58:52.841181       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1024 19:58:52.843354       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"11a4d3f2-05f3-4e06-bb0c-0526bf3271f5", APIVersion:"apps/v1", ResourceVersion:"398", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1024 19:58:52.843981       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"e142ab1f-aea0-4cb8-aaa0-b0755297e54d", APIVersion:"apps/v1", ResourceVersion:"401", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1024 19:58:52.856357       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1024 19:58:52.872566       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1024 19:58:52.872939       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"11a4d3f2-05f3-4e06-bb0c-0526bf3271f5", APIVersion:"apps/v1", ResourceVersion:"398", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1024 19:58:52.879393       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1024 19:58:52.880067       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"e142ab1f-aea0-4cb8-aaa0-b0755297e54d", APIVersion:"apps/v1", ResourceVersion:"407", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1024 19:58:52.887008       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1024 19:58:52.887086       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"e142ab1f-aea0-4cb8-aaa0-b0755297e54d", APIVersion:"apps/v1", ResourceVersion:"407", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1024 19:58:52.895161       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1024 19:58:52.895187       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"11a4d3f2-05f3-4e06-bb0c-0526bf3271f5", APIVersion:"apps/v1", ResourceVersion:"398", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1024 19:58:52.907937       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1024 19:58:52.908087       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"e142ab1f-aea0-4cb8-aaa0-b0755297e54d", APIVersion:"apps/v1", ResourceVersion:"407", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1024 19:58:53.404778       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-74d5856cc6", UID:"586e6413-6e1d-41ff-9f2d-847332a9e8b8", APIVersion:"apps/v1", ResourceVersion:"374", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-74d5856cc6-klqjh
	I1024 19:58:53.959825       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"11a4d3f2-05f3-4e06-bb0c-0526bf3271f5", APIVersion:"apps/v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-d6b4b5544-bvr9r
	I1024 19:58:53.988506       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"e142ab1f-aea0-4cb8-aaa0-b0755297e54d", APIVersion:"apps/v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-84b68f675b-vv7dw
	E1024 19:59:20.508110       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 19:59:22.276014       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1024 19:59:50.760503       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 19:59:54.277763       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1024 20:00:21.012526       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1024 20:00:26.279667       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [b1d416887ae6] <==
	* W1024 19:58:51.551719       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1024 19:58:51.588193       1 node.go:135] Successfully retrieved node IP: 192.168.72.163
	I1024 19:58:51.588265       1 server_others.go:149] Using iptables Proxier.
	I1024 19:58:51.588840       1 server.go:529] Version: v1.16.0
	I1024 19:58:51.608258       1 config.go:131] Starting endpoints config controller
	I1024 19:58:51.608316       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1024 19:58:51.611872       1 config.go:313] Starting service config controller
	I1024 19:58:51.614081       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1024 19:58:51.708666       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1024 19:58:51.723730       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [df740a7cf91b] <==
	* I1024 19:58:30.370706       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1024 19:58:30.374538       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1024 19:58:30.436560       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1024 19:58:30.437382       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1024 19:58:30.436690       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1024 19:58:30.436855       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1024 19:58:30.436971       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1024 19:58:30.437085       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1024 19:58:30.437156       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1024 19:58:30.437197       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1024 19:58:30.437204       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1024 19:58:30.437241       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1024 19:58:30.437306       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1024 19:58:31.438909       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1024 19:58:31.440384       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1024 19:58:31.441704       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1024 19:58:31.443060       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1024 19:58:31.449288       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1024 19:58:31.450239       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1024 19:58:31.452915       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1024 19:58:31.454104       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1024 19:58:31.458780       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1024 19:58:31.460800       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1024 19:58:31.464902       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1024 19:58:50.148741       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-10-24 19:52:45 UTC, ends at Tue 2023-10-24 20:00:30 UTC. --
	Oct 24 19:59:17 old-k8s-version-531596 kubelet[7022]: E1024 19:59:17.808523    7022 pod_workers.go:191] Error syncing pod 685ebd0b-3a1f-47ba-9405-6a5aea0d16ee ("dashboard-metrics-scraper-d6b4b5544-bvr9r_kubernetes-dashboard(685ebd0b-3a1f-47ba-9405-6a5aea0d16ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-bvr9r_kubernetes-dashboard(685ebd0b-3a1f-47ba-9405-6a5aea0d16ee)"
	Oct 24 19:59:22 old-k8s-version-531596 kubelet[7022]: E1024 19:59:22.182369    7022 pod_workers.go:191] Error syncing pod cb87274d-5115-4efc-9b33-cf4037cc5124 ("metrics-server-74d5856cc6-klqjh_kube-system(cb87274d-5115-4efc-9b33-cf4037cc5124)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 19:59:31 old-k8s-version-531596 kubelet[7022]: W1024 19:59:31.701273    7022 container.go:409] Failed to create summary reader for "/kubepods/besteffort/pod685ebd0b-3a1f-47ba-9405-6a5aea0d16ee/c57fa5f8d6d2eb83ff1071dfac02c4adfc971cbe900f35f8373ff3d55a5f190b": none of the resources are being tracked.
	Oct 24 19:59:32 old-k8s-version-531596 kubelet[7022]: W1024 19:59:32.026525    7022 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-bvr9r through plugin: invalid network status for
	Oct 24 19:59:32 old-k8s-version-531596 kubelet[7022]: E1024 19:59:32.032569    7022 pod_workers.go:191] Error syncing pod 685ebd0b-3a1f-47ba-9405-6a5aea0d16ee ("dashboard-metrics-scraper-d6b4b5544-bvr9r_kubernetes-dashboard(685ebd0b-3a1f-47ba-9405-6a5aea0d16ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-bvr9r_kubernetes-dashboard(685ebd0b-3a1f-47ba-9405-6a5aea0d16ee)"
	Oct 24 19:59:33 old-k8s-version-531596 kubelet[7022]: W1024 19:59:33.039849    7022 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-bvr9r through plugin: invalid network status for
	Oct 24 19:59:35 old-k8s-version-531596 kubelet[7022]: E1024 19:59:35.190265    7022 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 24 19:59:35 old-k8s-version-531596 kubelet[7022]: E1024 19:59:35.190729    7022 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 24 19:59:35 old-k8s-version-531596 kubelet[7022]: E1024 19:59:35.190850    7022 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 24 19:59:35 old-k8s-version-531596 kubelet[7022]: E1024 19:59:35.190935    7022 pod_workers.go:191] Error syncing pod cb87274d-5115-4efc-9b33-cf4037cc5124 ("metrics-server-74d5856cc6-klqjh_kube-system(cb87274d-5115-4efc-9b33-cf4037cc5124)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 24 19:59:37 old-k8s-version-531596 kubelet[7022]: E1024 19:59:37.806982    7022 pod_workers.go:191] Error syncing pod 685ebd0b-3a1f-47ba-9405-6a5aea0d16ee ("dashboard-metrics-scraper-d6b4b5544-bvr9r_kubernetes-dashboard(685ebd0b-3a1f-47ba-9405-6a5aea0d16ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-bvr9r_kubernetes-dashboard(685ebd0b-3a1f-47ba-9405-6a5aea0d16ee)"
	Oct 24 19:59:50 old-k8s-version-531596 kubelet[7022]: E1024 19:59:50.178561    7022 pod_workers.go:191] Error syncing pod cb87274d-5115-4efc-9b33-cf4037cc5124 ("metrics-server-74d5856cc6-klqjh_kube-system(cb87274d-5115-4efc-9b33-cf4037cc5124)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 19:59:51 old-k8s-version-531596 kubelet[7022]: E1024 19:59:51.172367    7022 pod_workers.go:191] Error syncing pod 685ebd0b-3a1f-47ba-9405-6a5aea0d16ee ("dashboard-metrics-scraper-d6b4b5544-bvr9r_kubernetes-dashboard(685ebd0b-3a1f-47ba-9405-6a5aea0d16ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-bvr9r_kubernetes-dashboard(685ebd0b-3a1f-47ba-9405-6a5aea0d16ee)"
	Oct 24 20:00:02 old-k8s-version-531596 kubelet[7022]: E1024 20:00:02.188643    7022 pod_workers.go:191] Error syncing pod cb87274d-5115-4efc-9b33-cf4037cc5124 ("metrics-server-74d5856cc6-klqjh_kube-system(cb87274d-5115-4efc-9b33-cf4037cc5124)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:00:06 old-k8s-version-531596 kubelet[7022]: W1024 20:00:06.303271    7022 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-bvr9r through plugin: invalid network status for
	Oct 24 20:00:07 old-k8s-version-531596 kubelet[7022]: W1024 20:00:07.652352    7022 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-bvr9r through plugin: invalid network status for
	Oct 24 20:00:07 old-k8s-version-531596 kubelet[7022]: E1024 20:00:07.659080    7022 pod_workers.go:191] Error syncing pod 685ebd0b-3a1f-47ba-9405-6a5aea0d16ee ("dashboard-metrics-scraper-d6b4b5544-bvr9r_kubernetes-dashboard(685ebd0b-3a1f-47ba-9405-6a5aea0d16ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-bvr9r_kubernetes-dashboard(685ebd0b-3a1f-47ba-9405-6a5aea0d16ee)"
	Oct 24 20:00:08 old-k8s-version-531596 kubelet[7022]: W1024 20:00:08.667778    7022 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-bvr9r through plugin: invalid network status for
	Oct 24 20:00:08 old-k8s-version-531596 kubelet[7022]: E1024 20:00:08.675537    7022 pod_workers.go:191] Error syncing pod 685ebd0b-3a1f-47ba-9405-6a5aea0d16ee ("dashboard-metrics-scraper-d6b4b5544-bvr9r_kubernetes-dashboard(685ebd0b-3a1f-47ba-9405-6a5aea0d16ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-bvr9r_kubernetes-dashboard(685ebd0b-3a1f-47ba-9405-6a5aea0d16ee)"
	Oct 24 20:00:13 old-k8s-version-531596 kubelet[7022]: E1024 20:00:13.173952    7022 pod_workers.go:191] Error syncing pod cb87274d-5115-4efc-9b33-cf4037cc5124 ("metrics-server-74d5856cc6-klqjh_kube-system(cb87274d-5115-4efc-9b33-cf4037cc5124)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 24 20:00:23 old-k8s-version-531596 kubelet[7022]: E1024 20:00:23.172641    7022 pod_workers.go:191] Error syncing pod 685ebd0b-3a1f-47ba-9405-6a5aea0d16ee ("dashboard-metrics-scraper-d6b4b5544-bvr9r_kubernetes-dashboard(685ebd0b-3a1f-47ba-9405-6a5aea0d16ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-bvr9r_kubernetes-dashboard(685ebd0b-3a1f-47ba-9405-6a5aea0d16ee)"
	Oct 24 20:00:28 old-k8s-version-531596 kubelet[7022]: E1024 20:00:28.213098    7022 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 24 20:00:28 old-k8s-version-531596 kubelet[7022]: E1024 20:00:28.213634    7022 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 24 20:00:28 old-k8s-version-531596 kubelet[7022]: E1024 20:00:28.213777    7022 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 24 20:00:28 old-k8s-version-531596 kubelet[7022]: E1024 20:00:28.213865    7022 pod_workers.go:191] Error syncing pod cb87274d-5115-4efc-9b33-cf4037cc5124 ("metrics-server-74d5856cc6-klqjh_kube-system(cb87274d-5115-4efc-9b33-cf4037cc5124)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	
	* 
	* ==> kubernetes-dashboard [be1180be37fd] <==
	* 2023/10/24 19:59:02 Starting overwatch
	2023/10/24 19:59:02 Using namespace: kubernetes-dashboard
	2023/10/24 19:59:02 Using in-cluster config to connect to apiserver
	2023/10/24 19:59:02 Using secret token for csrf signing
	2023/10/24 19:59:02 Initializing csrf token from kubernetes-dashboard-csrf secret
	2023/10/24 19:59:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2023/10/24 19:59:02 Successful initial request to the apiserver, version: v1.16.0
	2023/10/24 19:59:02 Generating JWE encryption key
	2023/10/24 19:59:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2023/10/24 19:59:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2023/10/24 19:59:02 Initializing JWE encryption key from synchronized object
	2023/10/24 19:59:02 Creating in-cluster Sidecar client
	2023/10/24 19:59:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/10/24 19:59:02 Serving insecurely on HTTP port: 9090
	2023/10/24 19:59:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/10/24 20:00:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [3f0919e1b5ce] <==
	* I1024 19:58:52.994127       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1024 19:58:53.059260       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1024 19:58:53.059347       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1024 19:58:53.096034       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1024 19:58:53.097069       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-531596_7ddd9b3e-5a96-4586-8cbb-1cba110bf952!
	I1024 19:58:53.215976       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-531596_7ddd9b3e-5a96-4586-8cbb-1cba110bf952!
	I1024 19:58:53.111119       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3a523971-09c6-4e63-8e35-df422c562cea", APIVersion:"v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-531596_7ddd9b3e-5a96-4586-8cbb-1cba110bf952 became leader
	

                                                
                                                
-- /stdout --
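The kubelet entries in the log above show two distinct failure modes. The ErrImagePull/ImagePullBackOff loop for fake.domain/registry.k8s.io/echoserver:1.4 looks deliberate: the image is prefixed with the unresolvable registry fake.domain, so metrics-server can never start. The dashboard-metrics-scraper container, by contrast, is genuinely crash-looping (Exited (1), back-off 10s/20s/40s). A minimal triage sketch against a cluster in this state, assuming kubectl is still pointed at the old-k8s-version-531596 context and the pod names from the log are still current:

	# List pods that never reached Running (same field selector helpers_test.go:261 uses below)
	kubectl --context old-k8s-version-531596 get po -A --field-selector=status.phase!=Running
	# Inspect the pull error and back-off events for the failing metrics-server pod
	kubectl --context old-k8s-version-531596 -n kube-system describe pod metrics-server-74d5856cc6-klqjh
	kubectl --context old-k8s-version-531596 -n kube-system get events --sort-by=.lastTimestamp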
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-531596 -n old-k8s-version-531596
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-531596 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-klqjh
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-531596 describe pod metrics-server-74d5856cc6-klqjh
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-531596 describe pod metrics-server-74d5856cc6-klqjh: exit status 1 (65.864537ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-klqjh" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-531596 describe pod metrics-server-74d5856cc6-klqjh: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.11s)
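The NotFound above is worth a second look: the describe command at helpers_test.go:277 is issued without a namespace flag, so it queries the default namespace, while metrics-server-74d5856cc6-klqjh lives in kube-system (see the Non-terminated Pods table earlier in the log); a deletion race with the ReplicaSet is also possible. A namespace-qualified, race-tolerant variant might look like the sketch below (not what the harness runs; note that --ignore-not-found exists on kubectl get but not on kubectl describe):

	kubectl --context old-k8s-version-531596 -n kube-system describe pod metrics-server-74d5856cc6-klqjh
	# Or, tolerating the pod having been deleted: exits 0 and prints nothing when absent
	kubectl --context old-k8s-version-531596 -n kube-system get pod metrics-server-74d5856cc6-klqjh --ignore-not-found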

                                                
                                    

Test pass (283/321)

Order  Passed test  Duration
3 TestDownloadOnly/v1.16.0/json-events 8.77
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.3/json-events 7.99
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.14
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
19 TestBinaryMirror 0.58
20 TestOffline 101.4
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
25 TestAddons/Setup 154.72
27 TestAddons/parallel/Registry 17.97
28 TestAddons/parallel/Ingress 21.7
29 TestAddons/parallel/InspektorGadget 10.78
30 TestAddons/parallel/MetricsServer 5.72
31 TestAddons/parallel/HelmTiller 11.42
33 TestAddons/parallel/CSI 94.06
34 TestAddons/parallel/Headlamp 16.65
35 TestAddons/parallel/CloudSpanner 5.79
36 TestAddons/parallel/LocalPath 55.5
37 TestAddons/parallel/NvidiaDevicePlugin 5.49
40 TestAddons/serial/GCPAuth/Namespaces 0.14
41 TestAddons/StoppedEnableDisable 13.4
42 TestCertOptions 85.11
43 TestCertExpiration 378.13
44 TestDockerFlags 87.54
45 TestForceSystemdFlag 55.43
46 TestForceSystemdEnv 77.71
48 TestKVMDriverInstallOrUpdate 3.53
52 TestErrorSpam/setup 51.08
53 TestErrorSpam/start 0.38
54 TestErrorSpam/status 0.85
55 TestErrorSpam/pause 1.24
56 TestErrorSpam/unpause 1.39
57 TestErrorSpam/stop 4.26
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 65.21
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 39.57
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.09
68 TestFunctional/serial/CacheCmd/cache/add_remote 2.29
69 TestFunctional/serial/CacheCmd/cache/add_local 1.39
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
71 TestFunctional/serial/CacheCmd/cache/list 0.06
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.27
74 TestFunctional/serial/CacheCmd/cache/delete 0.12
75 TestFunctional/serial/MinikubeKubectlCmd 0.12
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
77 TestFunctional/serial/ExtraConfig 38.88
78 TestFunctional/serial/ComponentHealth 0.06
79 TestFunctional/serial/LogsCmd 1.16
80 TestFunctional/serial/LogsFileCmd 1.1
81 TestFunctional/serial/InvalidService 4.26
83 TestFunctional/parallel/ConfigCmd 0.43
84 TestFunctional/parallel/DashboardCmd 16.76
85 TestFunctional/parallel/DryRun 0.29
86 TestFunctional/parallel/InternationalLanguage 0.16
87 TestFunctional/parallel/StatusCmd 0.88
91 TestFunctional/parallel/ServiceCmdConnect 32.54
92 TestFunctional/parallel/AddonsCmd 0.14
93 TestFunctional/parallel/PersistentVolumeClaim 59.93
95 TestFunctional/parallel/SSHCmd 0.51
96 TestFunctional/parallel/CpCmd 1.06
97 TestFunctional/parallel/MySQL 38.5
98 TestFunctional/parallel/FileSync 0.22
99 TestFunctional/parallel/CertSync 1.69
103 TestFunctional/parallel/NodeLabels 0.07
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.25
107 TestFunctional/parallel/License 0.2
108 TestFunctional/parallel/Version/short 0.08
109 TestFunctional/parallel/Version/components 0.84
119 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
120 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
121 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
122 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
123 TestFunctional/parallel/ImageCommands/ImageBuild 3.99
124 TestFunctional/parallel/ImageCommands/Setup 1.34
125 TestFunctional/parallel/DockerEnv/bash 1
126 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
127 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
128 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
129 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.35
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
131 TestFunctional/parallel/ProfileCmd/profile_list 0.29
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.97
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.59
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.96
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.83
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.93
139 TestFunctional/parallel/ServiceCmd/DeployApp 23.22
140 TestFunctional/parallel/MountCmd/any-port 8.1
141 TestFunctional/parallel/MountCmd/specific-port 1.94
142 TestFunctional/parallel/ServiceCmd/List 1.31
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.6
144 TestFunctional/parallel/ServiceCmd/JSONOutput 1.41
145 TestFunctional/parallel/ServiceCmd/HTTPS 0.41
146 TestFunctional/parallel/ServiceCmd/Format 0.37
147 TestFunctional/parallel/ServiceCmd/URL 0.43
148 TestFunctional/delete_addon-resizer_images 0.07
149 TestFunctional/delete_my-image_image 0.02
150 TestFunctional/delete_minikube_cached_images 0.02
151 TestGvisorAddon 314.52
154 TestImageBuild/serial/Setup 52.65
155 TestImageBuild/serial/NormalBuild 1.6
156 TestImageBuild/serial/BuildWithBuildArg 1.32
157 TestImageBuild/serial/BuildWithDockerIgnore 0.38
158 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.29
161 TestIngressAddonLegacy/StartLegacyK8sCluster 76.75
163 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 17.4
164 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.64
165 TestIngressAddonLegacy/serial/ValidateIngressAddons 38.27
168 TestJSONOutput/start/Command 70.55
169 TestJSONOutput/start/Audit 0
171 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/pause/Command 0.58
175 TestJSONOutput/pause/Audit 0
177 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/unpause/Command 0.56
181 TestJSONOutput/unpause/Audit 0
183 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/stop/Command 8.11
187 TestJSONOutput/stop/Audit 0
189 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
191 TestErrorJSONOutput 0.21
196 TestMainNoArgs 0.06
197 TestMinikubeProfile 108.39
200 TestMountStart/serial/StartWithMountFirst 28.69
201 TestMountStart/serial/VerifyMountFirst 0.39
202 TestMountStart/serial/StartWithMountSecond 29.45
203 TestMountStart/serial/VerifyMountSecond 0.4
204 TestMountStart/serial/DeleteFirst 0.89
205 TestMountStart/serial/VerifyMountPostDelete 0.41
206 TestMountStart/serial/Stop 11.17
207 TestMountStart/serial/RestartStopped 24.5
208 TestMountStart/serial/VerifyMountPostStop 0.4
211 TestMultiNode/serial/FreshStart2Nodes 132.85
212 TestMultiNode/serial/DeployApp2Nodes 6.38
213 TestMultiNode/serial/PingHostFrom2Pods 0.93
214 TestMultiNode/serial/AddNode 52.82
215 TestMultiNode/serial/ProfileList 0.22
216 TestMultiNode/serial/CopyFile 7.79
217 TestMultiNode/serial/StopNode 4.07
218 TestMultiNode/serial/StartAfterStop 32.36
219 TestMultiNode/serial/RestartKeepsNodes 174.43
220 TestMultiNode/serial/DeleteNode 1.73
221 TestMultiNode/serial/StopMultiNode 25.67
222 TestMultiNode/serial/RestartMultiNode 135.67
223 TestMultiNode/serial/ValidateNameConflict 52.05
228 TestPreload 176.11
230 TestScheduledStopUnix 124.98
231 TestSkaffold 139.22
234 TestRunningBinaryUpgrade 190.55
236 TestKubernetesUpgrade 208.56
249 TestStoppedBinaryUpgrade/Setup 0.52
251 TestStoppedBinaryUpgrade/MinikubeLogs 3.21
253 TestPause/serial/Start 121.99
261 TestPause/serial/SecondStartNoReconfiguration 44.68
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
264 TestNoKubernetes/serial/StartWithK8s 62.03
265 TestPause/serial/Pause 0.62
266 TestPause/serial/VerifyStatus 0.26
267 TestPause/serial/Unpause 0.56
268 TestPause/serial/PauseAgain 0.72
269 TestPause/serial/DeletePaused 1.13
270 TestPause/serial/VerifyDeletedResources 3.32
271 TestNoKubernetes/serial/StartWithStopK8s 31.75
272 TestNoKubernetes/serial/Start 28.93
273 TestNetworkPlugins/group/auto/Start 126.89
274 TestNetworkPlugins/group/kindnet/Start 114.9
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
276 TestNoKubernetes/serial/ProfileList 0.74
277 TestNoKubernetes/serial/Stop 2.15
278 TestNoKubernetes/serial/StartNoArgs 79.24
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
280 TestNetworkPlugins/group/calico/Start 107.78
281 TestNetworkPlugins/group/auto/KubeletFlags 0.36
282 TestNetworkPlugins/group/auto/NetCatPod 12.59
283 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
284 TestNetworkPlugins/group/custom-flannel/Start 90.01
285 TestNetworkPlugins/group/auto/DNS 0.22
286 TestNetworkPlugins/group/auto/Localhost 0.18
287 TestNetworkPlugins/group/auto/HairPin 0.18
288 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
289 TestNetworkPlugins/group/kindnet/NetCatPod 14.37
290 TestNetworkPlugins/group/kindnet/DNS 0.29
291 TestNetworkPlugins/group/kindnet/Localhost 0.22
292 TestNetworkPlugins/group/kindnet/HairPin 0.19
293 TestNetworkPlugins/group/false/Start 94.86
294 TestNetworkPlugins/group/enable-default-cni/Start 108.81
295 TestNetworkPlugins/group/calico/ControllerPod 5.03
296 TestNetworkPlugins/group/calico/KubeletFlags 0.24
297 TestNetworkPlugins/group/calico/NetCatPod 12.43
298 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
299 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.58
300 TestNetworkPlugins/group/calico/DNS 0.21
301 TestNetworkPlugins/group/calico/Localhost 0.18
302 TestNetworkPlugins/group/calico/HairPin 0.18
303 TestNetworkPlugins/group/custom-flannel/DNS 0.28
304 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
305 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
306 TestNetworkPlugins/group/false/KubeletFlags 0.3
307 TestNetworkPlugins/group/false/NetCatPod 14.63
308 TestNetworkPlugins/group/flannel/Start 84.11
309 TestNetworkPlugins/group/bridge/Start 104.55
310 TestNetworkPlugins/group/false/DNS 0.2
311 TestNetworkPlugins/group/false/Localhost 0.16
312 TestNetworkPlugins/group/false/HairPin 0.15
313 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
314 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.51
315 TestNetworkPlugins/group/kubenet/Start 114.15
316 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
317 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
318 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
320 TestStartStop/group/old-k8s-version/serial/FirstStart 179.34
321 TestNetworkPlugins/group/flannel/ControllerPod 5.02
322 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
323 TestNetworkPlugins/group/flannel/NetCatPod 12.43
324 TestNetworkPlugins/group/flannel/DNS 0.25
325 TestNetworkPlugins/group/flannel/Localhost 0.24
326 TestNetworkPlugins/group/flannel/HairPin 0.24
327 TestNetworkPlugins/group/bridge/KubeletFlags 0.23
328 TestNetworkPlugins/group/bridge/NetCatPod 13.45
330 TestStartStop/group/no-preload/serial/FirstStart 104.38
331 TestNetworkPlugins/group/bridge/DNS 0.23
332 TestNetworkPlugins/group/bridge/Localhost 0.19
333 TestNetworkPlugins/group/bridge/HairPin 0.2
334 TestNetworkPlugins/group/kubenet/KubeletFlags 0.45
335 TestNetworkPlugins/group/kubenet/NetCatPod 11.55
337 TestStartStop/group/embed-certs/serial/FirstStart 123.78
338 TestNetworkPlugins/group/kubenet/DNS 0.2
339 TestNetworkPlugins/group/kubenet/Localhost 0.15
340 TestNetworkPlugins/group/kubenet/HairPin 0.15
342 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 119.69
343 TestStartStop/group/no-preload/serial/DeployApp 10.56
344 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.44
345 TestStartStop/group/old-k8s-version/serial/DeployApp 9.51
346 TestStartStop/group/no-preload/serial/Stop 13.16
347 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.86
348 TestStartStop/group/old-k8s-version/serial/Stop 13.13
349 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
350 TestStartStop/group/no-preload/serial/SecondStart 335.32
351 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
352 TestStartStop/group/old-k8s-version/serial/SecondStart 477.32
353 TestStartStop/group/embed-certs/serial/DeployApp 10.44
354 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.21
355 TestStartStop/group/embed-certs/serial/Stop 13.14
356 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
357 TestStartStop/group/embed-certs/serial/SecondStart 336.29
358 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.6
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.39
360 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.14
361 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
368 TestStartStop/group/newest-cni/serial/FirstStart 75.29
369 TestStartStop/group/newest-cni/serial/DeployApp 0
370 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.11
371 TestStartStop/group/newest-cni/serial/Stop 8.12
372 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
373 TestStartStop/group/newest-cni/serial/SecondStart 50.54
374 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
375 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
376 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
377 TestStartStop/group/newest-cni/serial/Pause 2.76
378 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 21.02
379 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
380 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
381 TestStartStop/group/no-preload/serial/Pause 2.61
382 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 16.02
383 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
384 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
385 TestStartStop/group/embed-certs/serial/Pause 2.71
386 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
387 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
389 TestStartStop/group/old-k8s-version/serial/Pause 2.43
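The pass table above is plain whitespace-separated text: sequence number, test name, duration in seconds (gaps in the numbering are presumably the failed or skipped tests). As an aside, a minimal Go sketch, not part of the minikube suite, that reads rows in that shape from stdin and prints the slowest tests first:

// durations.go: sort "index name seconds" rows from the pass table
// above and print the slowest tests first. Hypothetical helper, not
// part of the minikube repository.
package main

import (
	"bufio"
	"fmt"
	"os"
	"sort"
	"strconv"
	"strings"
)

type row struct {
	name    string
	seconds float64
}

func main() {
	var rows []row
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		f := strings.Fields(sc.Text())
		if len(f) != 3 {
			continue // skip anything that is not "index name seconds"
		}
		secs, err := strconv.ParseFloat(f[2], 64)
		if err != nil {
			continue
		}
		rows = append(rows, row{name: f[1], seconds: secs})
	}
	sort.Slice(rows, func(i, j int) bool { return rows[i].seconds > rows[j].seconds })
	for _, r := range rows {
		fmt.Printf("%8.2f  %s\n", r.seconds, r.name)
	}
}

Fed the rows visible here, TestStartStop/group/old-k8s-version/serial/SecondStart (477.32 s) would top the list.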
TestDownloadOnly/v1.16.0/json-events (8.77s)
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-055097 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-055097 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (8.773653421s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (8.77s)

TestDownloadOnly/v1.16.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-055097
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-055097: exit status 85 (74.813836ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-055097 | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC |          |
	|         | -p download-only-055097        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:00:51
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 19:00:51.926456   16454 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:00:51.926675   16454 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:00:51.926683   16454 out.go:309] Setting ErrFile to fd 2...
	I1024 19:00:51.926687   16454 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:00:51.926839   16454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9104/.minikube/bin
	W1024 19:00:51.926967   16454 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17485-9104/.minikube/config/config.json: open /home/jenkins/minikube-integration/17485-9104/.minikube/config/config.json: no such file or directory
	I1024 19:00:51.927519   16454 out.go:303] Setting JSON to true
	I1024 19:00:51.928356   16454 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2350,"bootTime":1698171702,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:00:51.928411   16454 start.go:138] virtualization: kvm guest
	I1024 19:00:51.930949   16454 out.go:97] [download-only-055097] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	W1024 19:00:51.931054   16454 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17485-9104/.minikube/cache/preloaded-tarball: no such file or directory
	I1024 19:00:51.932567   16454 out.go:169] MINIKUBE_LOCATION=17485
	I1024 19:00:51.931088   16454 notify.go:220] Checking for updates...
	I1024 19:00:51.935355   16454 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:00:51.936737   16454 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17485-9104/kubeconfig
	I1024 19:00:51.938019   16454 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9104/.minikube
	I1024 19:00:51.939309   16454 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1024 19:00:51.941598   16454 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1024 19:00:51.941804   16454 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:00:52.044171   16454 out.go:97] Using the kvm2 driver based on user configuration
	I1024 19:00:52.044200   16454 start.go:298] selected driver: kvm2
	I1024 19:00:52.044206   16454 start.go:902] validating driver "kvm2" against <nil>
	I1024 19:00:52.044522   16454 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:00:52.044665   16454 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9104/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 19:00:52.059451   16454 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1024 19:00:52.059503   16454 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1024 19:00:52.060054   16454 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1024 19:00:52.060199   16454 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1024 19:00:52.060253   16454 cni.go:84] Creating CNI manager for ""
	I1024 19:00:52.060272   16454 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1024 19:00:52.060283   16454 start_flags.go:323] config:
	{Name:download-only-055097 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-055097 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:00:52.060476   16454 iso.go:125] acquiring lock: {Name:mkf528b771f12bbaddd502db30db0ccdeec4a711 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:00:52.062593   16454 out.go:97] Downloading VM boot image ...
	I1024 19:00:52.062638   16454 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17485-9104/.minikube/cache/iso/amd64/minikube-v1.31.0-1697471113-17434-amd64.iso
	I1024 19:00:54.958869   16454 out.go:97] Starting control plane node download-only-055097 in cluster download-only-055097
	I1024 19:00:54.958893   16454 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1024 19:00:54.986064   16454 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1024 19:00:54.986092   16454 cache.go:57] Caching tarball of preloaded images
	I1024 19:00:54.986314   16454 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1024 19:00:54.988351   16454 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1024 19:00:54.988364   16454 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1024 19:00:55.015102   16454 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/17485-9104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-055097"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
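This subtest passes even though the command exits non-zero: with only a download-only profile on disk there is no control plane, so "minikube logs" is expected to fail with exit status 85, and the assertion is on the exit code itself. A sketch of that pattern, a Go test asserting a specific exit code from a CLI binary; the test name and wiring here are hypothetical, not the real aaa_download_only_test.go:

package integration

import (
	"os/exec"
	"testing"
)

// TestLogsExitCode (hypothetical): run the minikube binary and require
// that "logs" against a download-only profile fails with exit 85.
func TestLogsExitCode(t *testing.T) {
	cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-055097")
	out, err := cmd.CombinedOutput()
	if err == nil {
		t.Fatalf("expected failure, got success:\n%s", out)
	}
	ee, ok := err.(*exec.ExitError)
	if !ok {
		t.Fatalf("command did not run to completion: %v", err)
	}
	if code := ee.ExitCode(); code != 85 {
		t.Fatalf("expected exit status 85, got %d:\n%s", code, out)
	}
}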

                                                
                                    
TestDownloadOnly/v1.28.3/json-events (7.99s)
=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-055097 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-055097 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=docker --driver=kvm2 : (7.985965456s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (7.99s)

TestDownloadOnly/v1.28.3/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

TestDownloadOnly/v1.28.3/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-055097
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-055097: exit status 85 (70.335363ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-055097 | jenkins | v1.31.2 | 24 Oct 23 19:00 UTC |          |
	|         | -p download-only-055097        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-055097 | jenkins | v1.31.2 | 24 Oct 23 19:01 UTC |          |
	|         | -p download-only-055097        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/24 19:01:00
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1024 19:01:00.775823   16512 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:01:00.775926   16512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:01:00.775936   16512 out.go:309] Setting ErrFile to fd 2...
	I1024 19:01:00.775941   16512 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:01:00.776141   16512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9104/.minikube/bin
	W1024 19:01:00.776240   16512 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17485-9104/.minikube/config/config.json: open /home/jenkins/minikube-integration/17485-9104/.minikube/config/config.json: no such file or directory
	I1024 19:01:00.776636   16512 out.go:303] Setting JSON to true
	I1024 19:01:00.777503   16512 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2359,"bootTime":1698171702,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:01:00.777561   16512 start.go:138] virtualization: kvm guest
	I1024 19:01:00.779880   16512 out.go:97] [download-only-055097] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:01:00.781585   16512 out.go:169] MINIKUBE_LOCATION=17485
	I1024 19:01:00.780045   16512 notify.go:220] Checking for updates...
	I1024 19:01:00.784651   16512 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:01:00.786103   16512 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17485-9104/kubeconfig
	I1024 19:01:00.787479   16512 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9104/.minikube
	I1024 19:01:00.788783   16512 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1024 19:01:00.791173   16512 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1024 19:01:00.791615   16512 config.go:182] Loaded profile config "download-only-055097": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1024 19:01:00.791655   16512 start.go:810] api.Load failed for download-only-055097: filestore "download-only-055097": Docker machine "download-only-055097" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1024 19:01:00.791732   16512 driver.go:378] Setting default libvirt URI to qemu:///system
	W1024 19:01:00.791762   16512 start.go:810] api.Load failed for download-only-055097: filestore "download-only-055097": Docker machine "download-only-055097" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1024 19:01:00.825019   16512 out.go:97] Using the kvm2 driver based on existing profile
	I1024 19:01:00.825043   16512 start.go:298] selected driver: kvm2
	I1024 19:01:00.825048   16512 start.go:902] validating driver "kvm2" against &{Name:download-only-055097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.16.0 ClusterName:download-only-055097 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:01:00.825395   16512 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:01:00.825468   16512 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17485-9104/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1024 19:01:00.839592   16512 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1024 19:01:00.840271   16512 cni.go:84] Creating CNI manager for ""
	I1024 19:01:00.840293   16512 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1024 19:01:00.840307   16512 start_flags.go:323] config:
	{Name:download-only-055097 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-055097 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Sock
etVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:01:00.840457   16512 iso.go:125] acquiring lock: {Name:mkf528b771f12bbaddd502db30db0ccdeec4a711 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1024 19:01:00.842416   16512 out.go:97] Starting control plane node download-only-055097 in cluster download-only-055097
	I1024 19:01:00.842429   16512 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1024 19:01:00.867837   16512 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1024 19:01:00.867875   16512 cache.go:57] Caching tarball of preloaded images
	I1024 19:01:00.868012   16512 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1024 19:01:00.869885   16512 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1024 19:01:00.869896   16512 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 ...
	I1024 19:01:00.895888   16512 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4?checksum=md5:82104bbf889ff8b69d5c141ce86c05ac -> /home/jenkins/minikube-integration/17485-9104/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-055097"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.14s)
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.14s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-055097
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.58s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-936173 --alsologtostderr --binary-mirror http://127.0.0.1:42393 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-936173" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-936173
--- PASS: TestBinaryMirror (0.58s)
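TestBinaryMirror points --binary-mirror at a local HTTP endpoint (http://127.0.0.1:42393 in this run) so the download path can be exercised without reaching the public release buckets. A sketch of one way to stand up such a file-serving mirror inside a Go test with the standard library's net/http/httptest; the helper name and testdata directory are assumptions, not the suite's actual code:

package integration

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

// newBinaryMirror (hypothetical): serve a local directory of
// pre-downloaded artifacts over HTTP and hand its URL to --binary-mirror.
func newBinaryMirror(t *testing.T) string {
	t.Helper()
	srv := httptest.NewServer(http.FileServer(http.Dir("testdata/mirror")))
	t.Cleanup(srv.Close)
	return srv.URL // e.g. http://127.0.0.1:42393
}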

                                                
                                    
TestOffline (101.4s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-713006 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-713006 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m40.216380595s)
helpers_test.go:175: Cleaning up "offline-docker-713006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-713006
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-713006: (1.185570158s)
--- PASS: TestOffline (101.40s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-903896
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-903896: exit status 85 (65.832347ms)

-- stdout --
	* Profile "addons-903896" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-903896"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-903896
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-903896: exit status 85 (65.229334ms)

-- stdout --
	* Profile "addons-903896" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-903896"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (154.72s)
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-903896 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-903896 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m34.721408427s)
--- PASS: TestAddons/Setup (154.72s)

TestAddons/parallel/Registry (17.97s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 20.598969ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-xfc6g" [7a45a877-3e0a-4d0f-95f9-b646d5e53172] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.018570651s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-v4q8q" [bd478214-cf0f-433e-9c79-035656042925] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.01784081s
addons_test.go:339: (dbg) Run:  kubectl --context addons-903896 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-903896 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-903896 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.874653914s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-903896 ip
2023/10/24 19:04:01 [DEBUG] GET http://192.168.39.238:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-903896 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.97s)

TestAddons/parallel/Ingress (21.7s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-903896 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-903896 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-903896 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b92aff2d-ee3a-46cc-a304-ce705c11454b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b92aff2d-ee3a-46cc-a304-ce705c11454b] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.012230924s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-903896 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-903896 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-903896 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.238
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-903896 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-903896 addons disable ingress-dns --alsologtostderr -v=1: (1.879747971s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-903896 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-903896 addons disable ingress --alsologtostderr -v=1: (7.879284982s)
--- PASS: TestAddons/parallel/Ingress (21.70s)
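The check above runs curl inside the VM (over minikube ssh) against 127.0.0.1 with an explicit Host: nginx.example.com header, so nginx-ingress routes the request by virtual host rather than by IP. For reference, the same request written in Go must set Request.Host directly, since Go's HTTP client takes the wire-level Host from that field rather than from the Header map; this is an illustrative sketch, not part of the test:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Ask the ingress controller for the nginx service by virtual host.
	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com" // selects the matching ingress rule
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}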

                                                
                                    
TestAddons/parallel/InspektorGadget (10.78s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-cwcdh" [2a8363cf-c74c-4504-aef1-073c3a2bccc9] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.013991587s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-903896
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-903896: (5.76389717s)
--- PASS: TestAddons/parallel/InspektorGadget (10.78s)

TestAddons/parallel/MetricsServer (5.72s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 20.745782ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-n9qmr" [5de16506-6889-441d-8052-af8f7b3ef564] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.027031627s
addons_test.go:414: (dbg) Run:  kubectl --context addons-903896 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-903896 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.72s)

TestAddons/parallel/HelmTiller (11.42s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 4.10902ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-c7cgl" [3415d4b0-6e93-4eb3-9113-327181f131f9] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.03034144s
addons_test.go:472: (dbg) Run:  kubectl --context addons-903896 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-903896 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.80192973s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-903896 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.42s)

TestAddons/parallel/CSI (94.06s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 23.855867ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-903896 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-903896 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [0f715abb-0717-43a5-b352-b0a4561cfe02] Pending
helpers_test.go:344: "task-pv-pod" [0f715abb-0717-43a5-b352-b0a4561cfe02] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [0f715abb-0717-43a5-b352-b0a4561cfe02] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.019235827s
addons_test.go:583: (dbg) Run:  kubectl --context addons-903896 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-903896 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-903896 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-903896 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-903896 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-903896 delete pod task-pv-pod: (1.181345319s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-903896 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-903896 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-903896 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [fa275e44-8e1b-4204-ac70-2678f3681bb0] Pending
helpers_test.go:344: "task-pv-pod-restore" [fa275e44-8e1b-4204-ac70-2678f3681bb0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [fa275e44-8e1b-4204-ac70-2678f3681bb0] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.023623872s
addons_test.go:625: (dbg) Run:  kubectl --context addons-903896 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-903896 delete pod task-pv-pod-restore: (1.015649464s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-903896 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-903896 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-903896 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-903896 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.692948782s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-903896 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (94.06s)
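The long runs of identical kubectl invocations above are the test helper polling {.status.phase} until each claim binds; CSI provisioning and the snapshot restore account for most of the 94 s. A compact sketch of the same poll loop, with the function itself hypothetical and the kubectl command shape taken from this run's log:

package integration

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound (hypothetical): shell out to kubectl, as the helper
// above does, until the claim reports phase Bound or the timeout passes.
func waitForPVCBound(kubeContext, name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", namespace, name, timeout)
}

For this run that would be waitForPVCBound("addons-903896", "hpvc", "default", 6*time.Minute).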

                                                
                                    
TestAddons/parallel/Headlamp (16.65s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-903896 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-903896 --alsologtostderr -v=1: (2.608843875s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-94b766c-p87pf" [8b1f11f0-8147-481f-a4f4-64571aaaa6ab] Pending
helpers_test.go:344: "headlamp-94b766c-p87pf" [8b1f11f0-8147-481f-a4f4-64571aaaa6ab] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-p87pf" [8b1f11f0-8147-481f-a4f4-64571aaaa6ab] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.040228317s
--- PASS: TestAddons/parallel/Headlamp (16.65s)

TestAddons/parallel/CloudSpanner (5.79s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-47kjg" [592d7ce4-45bc-4f6a-aee9-003a129a17d9] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009775455s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-903896
--- PASS: TestAddons/parallel/CloudSpanner (5.79s)

TestAddons/parallel/LocalPath (55.5s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-903896 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-903896 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-903896 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [6d25aa00-38eb-40c9-bb67-8a703186314d] Pending
helpers_test.go:344: "test-local-path" [6d25aa00-38eb-40c9-bb67-8a703186314d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [6d25aa00-38eb-40c9-bb67-8a703186314d] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [6d25aa00-38eb-40c9-bb67-8a703186314d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.012166586s
addons_test.go:890: (dbg) Run:  kubectl --context addons-903896 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-903896 ssh "cat /opt/local-path-provisioner/pvc-6afcfa23-c1c0-4021-90f6-a5fff9002043_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-903896 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-903896 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-903896 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-903896 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.888352389s)
--- PASS: TestAddons/parallel/LocalPath (55.50s)
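
The ssh step above reads the written file back from the provisioner's host path, whose directory in this run is named pv-name_namespace_pvc-name. A hand-run sketch under that assumption, looking the PV name up instead of hard-coding it:

    # Resolve the dynamically provisioned PV backing the claim.
    PV=$(kubectl --context addons-903896 get pvc test-pvc -n default \
        -o jsonpath='{.spec.volumeName}')
    # Read the data back from inside the VM.
    minikube -p addons-903896 ssh "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"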

TestAddons/parallel/NvidiaDevicePlugin (5.49s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-qmrw7" [4d955ead-250c-46a3-a790-a823a3e9cda9] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.014272128s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-903896
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.49s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-903896 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-903896 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/StoppedEnableDisable (13.4s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-903896
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-903896: (13.103270831s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-903896
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-903896
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-903896
--- PASS: TestAddons/StoppedEnableDisable (13.40s)
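
What this test exercises is that addon toggling still works once the cluster is stopped; a minimal sketch with the same profile name:

    minikube stop -p addons-903896
    # As the test demonstrates, enable/disable succeed even though the VM is down.
    minikube addons enable dashboard -p addons-903896
    minikube addons disable dashboard -p addons-903896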

TestCertOptions (85.11s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-901196 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
E1024 19:42:26.656016   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/skaffold-807036/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-901196 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m23.355630733s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-901196 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-901196 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-901196 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-901196" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-901196
E1024 19:43:44.561942   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-901196: (1.265229705s)
--- PASS: TestCertOptions (85.11s)
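
To verify by hand that the extra --apiserver-ips and --apiserver-names made it into the serving certificate, the same openssl call can be filtered for SANs; a sketch assuming this run's profile name:

    # Expect 192.168.15.15 and www.google.com among the listed IPs/DNS names.
    minikube -p cert-options-901196 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'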

TestCertExpiration (378.13s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-159493 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-159493 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m53.333184501s)
E1024 19:41:45.695210   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/skaffold-807036/client.crt: no such file or directory
E1024 19:41:47.610791   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-159493 --memory=2048 --cert-expiration=8760h --driver=kvm2 
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-159493 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (1m23.695274465s)
helpers_test.go:175: Cleaning up "cert-expiration-159493" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-159493
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-159493: (1.09739855s)
--- PASS: TestCertExpiration (378.13s)

TestDockerFlags (87.54s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-828774 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-828774 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m26.00259879s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-828774 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-828774 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-828774" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-828774
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-828774: (1.083604753s)
--- PASS: TestDockerFlags (87.54s)
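
--docker-env values should surface in the docker unit's Environment property and --docker-opt values in its ExecStart line, which is exactly what the two systemctl probes above check; rerunnable by hand as:

    # Expect FOO=BAR and BAZ=BAT here.
    minikube -p docker-flags-828774 ssh "sudo systemctl show docker --property=Environment --no-pager"
    # Expect --debug and --icc=true here.
    minikube -p docker-flags-828774 ssh "sudo systemctl show docker --property=ExecStart --no-pager"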

TestForceSystemdFlag (55.43s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-960033 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
E1024 19:38:01.370418   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-960033 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (53.814545267s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-960033 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-960033" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-960033
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-960033: (1.331538949s)
--- PASS: TestForceSystemdFlag (55.43s)
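
The assertion behind this test is a one-liner: with --force-systemd, Docker inside the VM must report the systemd cgroup driver rather than cgroupfs. A sketch with this run's profile name:

    minikube -p force-systemd-flag-960033 ssh "docker info --format {{.CgroupDriver}}"
    # expected output: systemd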

TestForceSystemdEnv (77.71s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-599950 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-599950 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m16.208679617s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-599950 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-599950" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-599950
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-599950: (1.211276136s)
--- PASS: TestForceSystemdEnv (77.71s)

TestKVMDriverInstallOrUpdate (3.53s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.53s)

TestErrorSpam/setup (51.08s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-814803 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-814803 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-814803 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-814803 --driver=kvm2 : (51.078224834s)
--- PASS: TestErrorSpam/setup (51.08s)

TestErrorSpam/start (0.38s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814803 --log_dir /tmp/nospam-814803 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814803 --log_dir /tmp/nospam-814803 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814803 --log_dir /tmp/nospam-814803 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.85s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814803 --log_dir /tmp/nospam-814803 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814803 --log_dir /tmp/nospam-814803 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814803 --log_dir /tmp/nospam-814803 status
--- PASS: TestErrorSpam/status (0.85s)

TestErrorSpam/pause (1.24s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814803 --log_dir /tmp/nospam-814803 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814803 --log_dir /tmp/nospam-814803 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814803 --log_dir /tmp/nospam-814803 pause
--- PASS: TestErrorSpam/pause (1.24s)

TestErrorSpam/unpause (1.39s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814803 --log_dir /tmp/nospam-814803 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814803 --log_dir /tmp/nospam-814803 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814803 --log_dir /tmp/nospam-814803 unpause
--- PASS: TestErrorSpam/unpause (1.39s)

TestErrorSpam/stop (4.26s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814803 --log_dir /tmp/nospam-814803 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-814803 --log_dir /tmp/nospam-814803 stop: (4.095991788s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814803 --log_dir /tmp/nospam-814803 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-814803 --log_dir /tmp/nospam-814803 stop
--- PASS: TestErrorSpam/stop (4.26s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17485-9104/.minikube/files/etc/test/nested/copy/16443/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (65.21s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-280129 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-280129 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m5.211796138s)
--- PASS: TestFunctional/serial/StartWithProxy (65.21s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.57s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-280129 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-280129 --alsologtostderr -v=8: (39.564473709s)
functional_test.go:659: soft start took 39.565020424s for "functional-280129" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.57s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-280129 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.29s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.29s)
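
cache add pulls an image on the host and loads it into the node's runtime, so a manual round-trip looks like this (sketch, same profile name as this run):

    minikube -p functional-280129 cache add registry.k8s.io/pause:3.1
    # Confirm the image is now visible inside the node:
    minikube -p functional-280129 ssh sudo crictl images | grep pause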

TestFunctional/serial/CacheCmd/cache/add_local (1.39s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-280129 /tmp/TestFunctionalserialCacheCmdcacheadd_local2566978137/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 cache add minikube-local-cache-test:functional-280129
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-280129 cache add minikube-local-cache-test:functional-280129: (1.061180731s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 cache delete minikube-local-cache-test:functional-280129
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-280129
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.39s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.27s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-280129 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (250.843109ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.27s)
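
The reload sequence above is worth spelling out: delete the image inside the node, observe crictl inspecti fail (exit 1, as logged), then let cache reload restore it from the host-side cache. A hand-run sketch:

    minikube -p functional-280129 ssh sudo docker rmi registry.k8s.io/pause:latest
    minikube -p functional-280129 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: image gone
    minikube -p functional-280129 cache reload
    minikube -p functional-280129 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again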

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 kubectl -- --context functional-280129 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-280129 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (38.88s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-280129 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1024 19:08:44.563494   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
E1024 19:08:44.569156   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
E1024 19:08:44.579406   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
E1024 19:08:44.599670   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
E1024 19:08:44.640021   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
E1024 19:08:44.720399   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
E1024 19:08:44.880825   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
E1024 19:08:45.201392   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
E1024 19:08:45.842441   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
E1024 19:08:47.122981   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
E1024 19:08:49.684774   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
E1024 19:08:54.805614   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
E1024 19:09:05.046068   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-280129 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.875675981s)
functional_test.go:757: restart took 38.875791356s for "functional-280129" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.88s)
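
--extra-config takes component.flag=value pairs and is applied by soft-restarting the existing profile, which is what the 38s run above does; the minimal form:

    # Restart the running profile with an extra apiserver flag; the profile's
    # data and configuration are preserved across the restart.
    minikube start -p functional-280129 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all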

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-280129 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
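
The phase/status pairs above come from the tier=control-plane label that minikube's static pods carry; a compact jsonpath query that reproduces them (sketch, same context name):

    # Print each control-plane component with its pod phase.
    kubectl --context functional-280129 -n kube-system get po -l tier=control-plane \
      -o jsonpath='{range .items[*]}{.metadata.labels.component}{"\t"}{.status.phase}{"\n"}{end}'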

TestFunctional/serial/LogsCmd (1.16s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-280129 logs: (1.157145584s)
--- PASS: TestFunctional/serial/LogsCmd (1.16s)

TestFunctional/serial/LogsFileCmd (1.1s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 logs --file /tmp/TestFunctionalserialLogsFileCmd1595265544/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-280129 logs --file /tmp/TestFunctionalserialLogsFileCmd1595265544/001/logs.txt: (1.095871966s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.10s)

TestFunctional/serial/InvalidService (4.26s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-280129 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-280129
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-280129: exit status 115 (292.869365ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.11:31731 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-280129 delete -f testdata/invalidsvc.yaml
E1024 19:09:25.526700   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
--- PASS: TestFunctional/serial/InvalidService (4.26s)

TestFunctional/parallel/ConfigCmd (0.43s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-280129 config get cpus: exit status 14 (67.176496ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-280129 config get cpus: exit status 14 (61.31334ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
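
The exit status 14 seen twice above is what this run's config get returns for an unset key, so the full set/get/unset cycle behaves like this (sketch):

    minikube -p functional-280129 config set cpus 2
    minikube -p functional-280129 config get cpus     # prints: 2
    minikube -p functional-280129 config unset cpus
    minikube -p functional-280129 config get cpus     # exits 14: key not found in config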

TestFunctional/parallel/DashboardCmd (16.76s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-280129 --alsologtostderr -v=1]
E1024 19:10:06.487424   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-280129 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 23533: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.76s)

TestFunctional/parallel/DryRun (0.29s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-280129 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-280129 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (145.407491ms)

-- stdout --
	* [functional-280129] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-9104/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9104/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1024 19:10:05.979687   23429 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:10:05.979923   23429 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:10:05.979931   23429 out.go:309] Setting ErrFile to fd 2...
	I1024 19:10:05.979936   23429 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:10:05.980080   23429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9104/.minikube/bin
	I1024 19:10:05.980646   23429 out.go:303] Setting JSON to false
	I1024 19:10:05.981653   23429 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2904,"bootTime":1698171702,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:10:05.981710   23429 start.go:138] virtualization: kvm guest
	I1024 19:10:05.984056   23429 out.go:177] * [functional-280129] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1024 19:10:05.985580   23429 notify.go:220] Checking for updates...
	I1024 19:10:05.985590   23429 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:10:05.987199   23429 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:10:05.988993   23429 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9104/kubeconfig
	I1024 19:10:05.990533   23429 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9104/.minikube
	I1024 19:10:05.991975   23429 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:10:05.993426   23429 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:10:05.995202   23429 config.go:182] Loaded profile config "functional-280129": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1024 19:10:05.995588   23429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:10:05.995627   23429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:10:06.010596   23429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35117
	I1024 19:10:06.010978   23429 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:10:06.011575   23429 main.go:141] libmachine: Using API Version  1
	I1024 19:10:06.011598   23429 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:10:06.012011   23429 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:10:06.012200   23429 main.go:141] libmachine: (functional-280129) Calling .DriverName
	I1024 19:10:06.012489   23429 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:10:06.012809   23429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:10:06.012851   23429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:10:06.026828   23429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43461
	I1024 19:10:06.027215   23429 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:10:06.027668   23429 main.go:141] libmachine: Using API Version  1
	I1024 19:10:06.027693   23429 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:10:06.028019   23429 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:10:06.028187   23429 main.go:141] libmachine: (functional-280129) Calling .DriverName
	I1024 19:10:06.059150   23429 out.go:177] * Using the kvm2 driver based on existing profile
	I1024 19:10:06.060547   23429 start.go:298] selected driver: kvm2
	I1024 19:10:06.060555   23429 start.go:902] validating driver "kvm2" against &{Name:functional-280129 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-280129 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.11 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:10:06.060638   23429 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:10:06.062883   23429 out.go:177] 
	W1024 19:10:06.064360   23429 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1024 19:10:06.065596   23429 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-280129 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.29s)
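
--dry-run walks the full validation path without touching the VM, which is why the undersized --memory 250MB request fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) while the second, flag-free invocation passes; reproducible as:

    minikube start -p functional-280129 --dry-run --memory 250MB --driver=kvm2
    echo $?   # 23 in this run: requested memory is below the usable minimum of 1800MB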

TestFunctional/parallel/InternationalLanguage (0.16s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-280129 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-280129 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (157.214645ms)

-- stdout --
	* [functional-280129] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-9104/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9104/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1024 19:10:01.848821   23128 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:10:01.848958   23128 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:10:01.848970   23128 out.go:309] Setting ErrFile to fd 2...
	I1024 19:10:01.848980   23128 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:10:01.849350   23128 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9104/.minikube/bin
	I1024 19:10:01.850070   23128 out.go:303] Setting JSON to false
	I1024 19:10:01.851253   23128 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2900,"bootTime":1698171702,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1024 19:10:01.851334   23128 start.go:138] virtualization: kvm guest
	I1024 19:10:01.853709   23128 out.go:177] * [functional-280129] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I1024 19:10:01.855221   23128 out.go:177]   - MINIKUBE_LOCATION=17485
	I1024 19:10:01.855285   23128 notify.go:220] Checking for updates...
	I1024 19:10:01.856714   23128 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1024 19:10:01.858191   23128 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17485-9104/kubeconfig
	I1024 19:10:01.859679   23128 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9104/.minikube
	I1024 19:10:01.861248   23128 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1024 19:10:01.862685   23128 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1024 19:10:01.864548   23128 config.go:182] Loaded profile config "functional-280129": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1024 19:10:01.864981   23128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:10:01.865024   23128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:10:01.878969   23128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44875
	I1024 19:10:01.879364   23128 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:10:01.879885   23128 main.go:141] libmachine: Using API Version  1
	I1024 19:10:01.879928   23128 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:10:01.880314   23128 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:10:01.880493   23128 main.go:141] libmachine: (functional-280129) Calling .DriverName
	I1024 19:10:01.880732   23128 driver.go:378] Setting default libvirt URI to qemu:///system
	I1024 19:10:01.881011   23128 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:10:01.881049   23128 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:10:01.896544   23128 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35223
	I1024 19:10:01.897039   23128 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:10:01.897632   23128 main.go:141] libmachine: Using API Version  1
	I1024 19:10:01.897660   23128 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:10:01.897953   23128 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:10:01.898150   23128 main.go:141] libmachine: (functional-280129) Calling .DriverName
	I1024 19:10:01.935438   23128 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1024 19:10:01.936934   23128 start.go:298] selected driver: kvm2
	I1024 19:10:01.936956   23128 start.go:902] validating driver "kvm2" against &{Name:functional-280129 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-280129 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.11 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1024 19:10:01.937089   23128 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1024 19:10:01.939483   23128 out.go:177] 
	W1024 19:10:01.941126   23128 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1024 19:10:01.942628   23128 out.go:177] 

** /stderr **
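The French lines above are the point of this test: minikube runs with a French locale and must localize its output. "Utilisation du pilote kvm2 basé sur le profil existant" is "Using the kvm2 driver based on the existing profile", and the fatal message translates to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is less than the usable minimum of 1800MB". The exact flags the test passes are not shown in this excerpt; a sketch of the scenario, with the locale and memory values assumed from the output above, is:

	LC_ALL=fr out/minikube-linux-amd64 start -p functional-280129 --memory=250MB --alsologtostderr -v=1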
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (0.88s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 status -o json
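The -f flag in the second status call above takes a Go template rendered against minikube's status struct; "kublet" is just the literal label baked into the test's format string, while the template keys ({{.Host}}, {{.Kubelet}}, {{.APIServer}}, {{.Kubeconfig}}) select the fields. On a healthy cluster the rendered line would look roughly like this (illustrative; the actual output is not captured in this log):

	host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured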
--- PASS: TestFunctional/parallel/StatusCmd (0.88s)

TestFunctional/parallel/ServiceCmdConnect (32.54s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-280129 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-280129 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-6h26h" [26264818-c8a8-4b57-934c-aba83a153f2c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-6h26h" [26264818-c8a8-4b57-934c-aba83a153f2c] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 32.013122s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.11:32132
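The endpoint printed above is a plain NodePort URL, so the echoserver body that follows could equally be fetched by hand:

	curl -s http://192.168.50.11:32132/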
functional_test.go:1674: http://192.168.50.11:32132: success! body:

Hostname: hello-node-connect-55497b8b78-6h26h

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.11:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.11:32132
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (32.54s)

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (59.93s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a7c99eb2-1b2a-46c0-b945-d74325d79c9d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.156708663s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-280129 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-280129 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-280129 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-280129 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-280129 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b330d8d6-0ca5-4ca4-a73a-867ea66cea4e] Pending
helpers_test.go:344: "sp-pod" [b330d8d6-0ca5-4ca4-a73a-867ea66cea4e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b330d8d6-0ca5-4ca4-a73a-867ea66cea4e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 34.026019169s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-280129 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-280129 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-280129 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0da5cc1c-cf9c-4cb6-92e2-dcca57314523] Pending
helpers_test.go:344: "sp-pod" [0da5cc1c-cf9c-4cb6-92e2-dcca57314523] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0da5cc1c-cf9c-4cb6-92e2-dcca57314523] Running
2023/10/24 19:10:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.02895201s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-280129 exec sp-pod -- ls /tmp/mount
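The sequence above is the actual persistence check: touch /tmp/mount/foo inside the pod, delete and recreate the pod, then list the mount to confirm the file survived the restart. The pvc.yaml being applied is not reproduced in this log; a minimal sketch of a claim the default StorageClass could bind (the name myclaim comes from the "get pvc" calls above, the size is assumed) would be:

	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 500Mi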
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (59.93s)

TestFunctional/parallel/SSHCmd (0.51s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.51s)

TestFunctional/parallel/CpCmd (1.06s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh -n functional-280129 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 cp functional-280129:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1835208565/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh -n functional-280129 "sudo cat /home/docker/cp-test.txt"
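Both directions of minikube cp are exercised above: host-to-node (testdata/cp-test.txt into /home/docker/cp-test.txt) and node-to-host (back out into the test's temp directory), with ssh "sudo cat" verifying the contents after each hop. The general shape, as used above, is:

	out/minikube-linux-amd64 -p <profile> cp <source> [<node>:]<target>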
--- PASS: TestFunctional/parallel/CpCmd (1.06s)

TestFunctional/parallel/MySQL (38.5s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-280129 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-hwltq" [9fd840f4-a737-4292-8068-9cef456aa5d5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-hwltq" [9fd840f4-a737-4292-8068-9cef456aa5d5] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 31.037780898s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-280129 exec mysql-859648c796-hwltq -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-280129 exec mysql-859648c796-hwltq -- mysql -ppassword -e "show databases;": exit status 1 (276.836706ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-280129 exec mysql-859648c796-hwltq -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-280129 exec mysql-859648c796-hwltq -- mysql -ppassword -e "show databases;": exit status 1 (318.781941ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-280129 exec mysql-859648c796-hwltq -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-280129 exec mysql-859648c796-hwltq -- mysql -ppassword -e "show databases;": exit status 1 (223.971238ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-280129 exec mysql-859648c796-hwltq -- mysql -ppassword -e "show databases;"
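The two ERROR 1045 responses and the single ERROR 2002 socket error above are expected churn: mysqld inside the mysql:5.7 container is still initializing when the first exec attempts arrive, so the test simply polls until the query succeeds. Outside the harness the noise can be avoided by waiting for readiness first (illustrative; assumes the deployment created from testdata/mysql.yaml is named mysql):

	kubectl --context functional-280129 wait --for=condition=ready pod -l app=mysql --timeout=300s
	kubectl --context functional-280129 exec deploy/mysql -- mysql -ppassword -e "show databases;"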
--- PASS: TestFunctional/parallel/MySQL (38.50s)

TestFunctional/parallel/FileSync (0.22s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/16443/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh "sudo cat /etc/test/nested/copy/16443/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
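The file exists inside the VM because minikube copies everything under $MINIKUBE_HOME/files/ into the machine's root filesystem when it starts; the harness stages the hosts file under files/etc/test/nested/copy/16443/ (16443 being, apparently, the test process's pid) before provisioning. Staging such a file by hand looks like:

	mkdir -p ~/.minikube/files/etc/test/nested/copy/16443
	echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/16443/hosts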
--- PASS: TestFunctional/parallel/FileSync (0.22s)

TestFunctional/parallel/CertSync (1.69s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/16443.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh "sudo cat /etc/ssl/certs/16443.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/16443.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh "sudo cat /usr/share/ca-certificates/16443.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/164432.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh "sudo cat /etc/ssl/certs/164432.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/164432.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh "sudo cat /usr/share/ca-certificates/164432.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
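The hash-named entries above (/etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0) are the OpenSSL subject-hash aliases of the two PEM files, which is how the test confirms the certificates were actually installed into the system trust store. The hash for a given PEM can be recomputed with:

	openssl x509 -noout -hash -in /usr/share/ca-certificates/16443.pem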
--- PASS: TestFunctional/parallel/CertSync (1.69s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-280129 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
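The go-template above iterates metadata.labels of the first node and prints the label keys. The same information is available less tersely with (equivalent check; output not captured in this log):

	kubectl --context functional-280129 get nodes --show-labels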
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.25s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-280129 ssh "sudo systemctl is-active crio": exit status 1 (249.113087ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
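Exit status 3 is the expected outcome here: systemctl is-active prints "inactive" and exits non-zero (3 being systemd's code for an inactive unit) because this cluster's active runtime is docker, so crio must not be running. The same probe by hand:

	out/minikube-linux-amd64 -p functional-280129 ssh "sudo systemctl is-active crio"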
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.25s)

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.84s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.84s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-280129 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-280129
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-280129
docker.io/kubernetesui/metrics-scraper:<none>
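Note that image ls never contacts a registry: as the stderr log below shows, it opens an SSH session into the VM and queries the Docker daemon directly. The equivalent manual query would be:

	out/minikube-linux-amd64 -p functional-280129 ssh 'docker images --no-trunc --format "{{json .}}"'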
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-280129 image ls --format short --alsologtostderr:
I1024 19:10:14.771808   24293 out.go:296] Setting OutFile to fd 1 ...
I1024 19:10:14.771918   24293 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:10:14.771926   24293 out.go:309] Setting ErrFile to fd 2...
I1024 19:10:14.771931   24293 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:10:14.772102   24293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9104/.minikube/bin
I1024 19:10:14.772609   24293 config.go:182] Loaded profile config "functional-280129": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1024 19:10:14.772704   24293 config.go:182] Loaded profile config "functional-280129": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1024 19:10:14.773059   24293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1024 19:10:14.773101   24293 main.go:141] libmachine: Launching plugin server for driver kvm2
I1024 19:10:14.787288   24293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38521
I1024 19:10:14.787674   24293 main.go:141] libmachine: () Calling .GetVersion
I1024 19:10:14.788229   24293 main.go:141] libmachine: Using API Version  1
I1024 19:10:14.788250   24293 main.go:141] libmachine: () Calling .SetConfigRaw
I1024 19:10:14.788559   24293 main.go:141] libmachine: () Calling .GetMachineName
I1024 19:10:14.788724   24293 main.go:141] libmachine: (functional-280129) Calling .GetState
I1024 19:10:14.791712   24293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1024 19:10:14.791755   24293 main.go:141] libmachine: Launching plugin server for driver kvm2
I1024 19:10:14.805828   24293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37555
I1024 19:10:14.806210   24293 main.go:141] libmachine: () Calling .GetVersion
I1024 19:10:14.806689   24293 main.go:141] libmachine: Using API Version  1
I1024 19:10:14.806714   24293 main.go:141] libmachine: () Calling .SetConfigRaw
I1024 19:10:14.807029   24293 main.go:141] libmachine: () Calling .GetMachineName
I1024 19:10:14.807188   24293 main.go:141] libmachine: (functional-280129) Calling .DriverName
I1024 19:10:14.807397   24293 ssh_runner.go:195] Run: systemctl --version
I1024 19:10:14.807424   24293 main.go:141] libmachine: (functional-280129) Calling .GetSSHHostname
I1024 19:10:14.810272   24293 main.go:141] libmachine: (functional-280129) DBG | domain functional-280129 has defined MAC address 52:54:00:47:d1:64 in network mk-functional-280129
I1024 19:10:14.810924   24293 main.go:141] libmachine: (functional-280129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:d1:64", ip: ""} in network mk-functional-280129: {Iface:virbr1 ExpiryTime:2023-10-24 20:07:05 +0000 UTC Type:0 Mac:52:54:00:47:d1:64 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:functional-280129 Clientid:01:52:54:00:47:d1:64}
I1024 19:10:14.810966   24293 main.go:141] libmachine: (functional-280129) DBG | domain functional-280129 has defined IP address 192.168.50.11 and MAC address 52:54:00:47:d1:64 in network mk-functional-280129
I1024 19:10:14.811162   24293 main.go:141] libmachine: (functional-280129) Calling .GetSSHPort
I1024 19:10:14.811316   24293 main.go:141] libmachine: (functional-280129) Calling .GetSSHKeyPath
I1024 19:10:14.811454   24293 main.go:141] libmachine: (functional-280129) Calling .GetSSHUsername
I1024 19:10:14.811582   24293 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/functional-280129/id_rsa Username:docker}
I1024 19:10:14.904639   24293 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1024 19:10:14.939977   24293 main.go:141] libmachine: Making call to close driver server
I1024 19:10:14.939995   24293 main.go:141] libmachine: (functional-280129) Calling .Close
I1024 19:10:14.940266   24293 main.go:141] libmachine: (functional-280129) DBG | Closing plugin on server side
I1024 19:10:14.940319   24293 main.go:141] libmachine: Successfully made call to close driver server
I1024 19:10:14.940328   24293 main.go:141] libmachine: Making call to close connection to plugin binary
I1024 19:10:14.940349   24293 main.go:141] libmachine: Making call to close driver server
I1024 19:10:14.940358   24293 main.go:141] libmachine: (functional-280129) Calling .Close
I1024 19:10:14.940699   24293 main.go:141] libmachine: (functional-280129) DBG | Closing plugin on server side
I1024 19:10:14.940700   24293 main.go:141] libmachine: Successfully made call to close driver server
I1024 19:10:14.940727   24293 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-280129 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/mysql                     | 5.7               | 3b85be0b10d38 | 581MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-apiserver              | v1.28.3           | 5374347291230 | 126MB  |
| registry.k8s.io/kube-scheduler              | v1.28.3           | 6d1b4fd1b182d | 60.1MB |
| registry.k8s.io/kube-controller-manager     | v1.28.3           | 10baa1ca17068 | 122MB  |
| gcr.io/google-containers/addon-resizer      | functional-280129 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-280129 | b77c779c411c4 | 30B    |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| docker.io/library/nginx                     | latest            | bc649bab30d15 | 187MB  |
| registry.k8s.io/kube-proxy                  | v1.28.3           | bfc896cf80fba | 73.1MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-280129 image ls --format table --alsologtostderr:
I1024 19:10:15.311731   24410 out.go:296] Setting OutFile to fd 1 ...
I1024 19:10:15.311834   24410 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:10:15.311846   24410 out.go:309] Setting ErrFile to fd 2...
I1024 19:10:15.311853   24410 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:10:15.312156   24410 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9104/.minikube/bin
I1024 19:10:15.312964   24410 config.go:182] Loaded profile config "functional-280129": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1024 19:10:15.313091   24410 config.go:182] Loaded profile config "functional-280129": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1024 19:10:15.313536   24410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1024 19:10:15.313606   24410 main.go:141] libmachine: Launching plugin server for driver kvm2
I1024 19:10:15.327717   24410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41871
I1024 19:10:15.328115   24410 main.go:141] libmachine: () Calling .GetVersion
I1024 19:10:15.328705   24410 main.go:141] libmachine: Using API Version  1
I1024 19:10:15.328730   24410 main.go:141] libmachine: () Calling .SetConfigRaw
I1024 19:10:15.329050   24410 main.go:141] libmachine: () Calling .GetMachineName
I1024 19:10:15.329216   24410 main.go:141] libmachine: (functional-280129) Calling .GetState
I1024 19:10:15.331007   24410 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1024 19:10:15.331047   24410 main.go:141] libmachine: Launching plugin server for driver kvm2
I1024 19:10:15.344642   24410 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40121
I1024 19:10:15.344996   24410 main.go:141] libmachine: () Calling .GetVersion
I1024 19:10:15.345419   24410 main.go:141] libmachine: Using API Version  1
I1024 19:10:15.345444   24410 main.go:141] libmachine: () Calling .SetConfigRaw
I1024 19:10:15.345762   24410 main.go:141] libmachine: () Calling .GetMachineName
I1024 19:10:15.345951   24410 main.go:141] libmachine: (functional-280129) Calling .DriverName
I1024 19:10:15.346153   24410 ssh_runner.go:195] Run: systemctl --version
I1024 19:10:15.346184   24410 main.go:141] libmachine: (functional-280129) Calling .GetSSHHostname
I1024 19:10:15.348875   24410 main.go:141] libmachine: (functional-280129) DBG | domain functional-280129 has defined MAC address 52:54:00:47:d1:64 in network mk-functional-280129
I1024 19:10:15.349279   24410 main.go:141] libmachine: (functional-280129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:d1:64", ip: ""} in network mk-functional-280129: {Iface:virbr1 ExpiryTime:2023-10-24 20:07:05 +0000 UTC Type:0 Mac:52:54:00:47:d1:64 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:functional-280129 Clientid:01:52:54:00:47:d1:64}
I1024 19:10:15.349319   24410 main.go:141] libmachine: (functional-280129) DBG | domain functional-280129 has defined IP address 192.168.50.11 and MAC address 52:54:00:47:d1:64 in network mk-functional-280129
I1024 19:10:15.349430   24410 main.go:141] libmachine: (functional-280129) Calling .GetSSHPort
I1024 19:10:15.349595   24410 main.go:141] libmachine: (functional-280129) Calling .GetSSHKeyPath
I1024 19:10:15.349729   24410 main.go:141] libmachine: (functional-280129) Calling .GetSSHUsername
I1024 19:10:15.349858   24410 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/functional-280129/id_rsa Username:docker}
I1024 19:10:15.451352   24410 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1024 19:10:15.547169   24410 main.go:141] libmachine: Making call to close driver server
I1024 19:10:15.547186   24410 main.go:141] libmachine: (functional-280129) Calling .Close
I1024 19:10:15.547430   24410 main.go:141] libmachine: Successfully made call to close driver server
I1024 19:10:15.547456   24410 main.go:141] libmachine: Making call to close connection to plugin binary
I1024 19:10:15.547471   24410 main.go:141] libmachine: Making call to close driver server
I1024 19:10:15.547474   24410 main.go:141] libmachine: (functional-280129) DBG | Closing plugin on server side
I1024 19:10:15.547479   24410 main.go:141] libmachine: (functional-280129) Calling .Close
I1024 19:10:15.547748   24410 main.go:141] libmachine: Successfully made call to close driver server
I1024 19:10:15.547765   24410 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-280129 image ls --format json --alsologtostderr:
[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"b77c779c411c443c8accd6aa5da8f1e71fd9083fffe045d42a9e554c8a41cffc","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-280129"],"size":"30"},{"id":"6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"60100000"},{"id":"bc649bab30d150c10a84031a7f54c99a8c31069c7bc324a7899d7125d59cc973","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"3b85be0b10d389e268b35d4c04075b95c295dd24d595e8c5261e43ab94c47de4","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"581000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df5
9a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-280129"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"126000000"},{"id":"bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","repoDigests":
[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"73100000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"122000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-280129 image ls --format json --alsologtostderr:
I1024 19:10:15.053197   24351 out.go:296] Setting OutFile to fd 1 ...
I1024 19:10:15.053329   24351 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:10:15.053340   24351 out.go:309] Setting ErrFile to fd 2...
I1024 19:10:15.053347   24351 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:10:15.053522   24351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9104/.minikube/bin
I1024 19:10:15.054078   24351 config.go:182] Loaded profile config "functional-280129": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1024 19:10:15.054193   24351 config.go:182] Loaded profile config "functional-280129": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1024 19:10:15.054573   24351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1024 19:10:15.054629   24351 main.go:141] libmachine: Launching plugin server for driver kvm2
I1024 19:10:15.069782   24351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36129
I1024 19:10:15.070223   24351 main.go:141] libmachine: () Calling .GetVersion
I1024 19:10:15.070730   24351 main.go:141] libmachine: Using API Version  1
I1024 19:10:15.070752   24351 main.go:141] libmachine: () Calling .SetConfigRaw
I1024 19:10:15.071120   24351 main.go:141] libmachine: () Calling .GetMachineName
I1024 19:10:15.071266   24351 main.go:141] libmachine: (functional-280129) Calling .GetState
I1024 19:10:15.075249   24351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1024 19:10:15.075294   24351 main.go:141] libmachine: Launching plugin server for driver kvm2
I1024 19:10:15.089161   24351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43939
I1024 19:10:15.089517   24351 main.go:141] libmachine: () Calling .GetVersion
I1024 19:10:15.089992   24351 main.go:141] libmachine: Using API Version  1
I1024 19:10:15.090060   24351 main.go:141] libmachine: () Calling .SetConfigRaw
I1024 19:10:15.090368   24351 main.go:141] libmachine: () Calling .GetMachineName
I1024 19:10:15.090537   24351 main.go:141] libmachine: (functional-280129) Calling .DriverName
I1024 19:10:15.090742   24351 ssh_runner.go:195] Run: systemctl --version
I1024 19:10:15.090772   24351 main.go:141] libmachine: (functional-280129) Calling .GetSSHHostname
I1024 19:10:15.093227   24351 main.go:141] libmachine: (functional-280129) DBG | domain functional-280129 has defined MAC address 52:54:00:47:d1:64 in network mk-functional-280129
I1024 19:10:15.093565   24351 main.go:141] libmachine: (functional-280129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:d1:64", ip: ""} in network mk-functional-280129: {Iface:virbr1 ExpiryTime:2023-10-24 20:07:05 +0000 UTC Type:0 Mac:52:54:00:47:d1:64 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:functional-280129 Clientid:01:52:54:00:47:d1:64}
I1024 19:10:15.093593   24351 main.go:141] libmachine: (functional-280129) DBG | domain functional-280129 has defined IP address 192.168.50.11 and MAC address 52:54:00:47:d1:64 in network mk-functional-280129
I1024 19:10:15.093722   24351 main.go:141] libmachine: (functional-280129) Calling .GetSSHPort
I1024 19:10:15.093868   24351 main.go:141] libmachine: (functional-280129) Calling .GetSSHKeyPath
I1024 19:10:15.094013   24351 main.go:141] libmachine: (functional-280129) Calling .GetSSHUsername
I1024 19:10:15.094171   24351 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/functional-280129/id_rsa Username:docker}
I1024 19:10:15.204255   24351 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1024 19:10:15.244687   24351 main.go:141] libmachine: Making call to close driver server
I1024 19:10:15.244701   24351 main.go:141] libmachine: (functional-280129) Calling .Close
I1024 19:10:15.244999   24351 main.go:141] libmachine: Successfully made call to close driver server
I1024 19:10:15.245021   24351 main.go:141] libmachine: Making call to close connection to plugin binary
I1024 19:10:15.245046   24351 main.go:141] libmachine: (functional-280129) DBG | Closing plugin on server side
I1024 19:10:15.245050   24351 main.go:141] libmachine: Making call to close driver server
I1024 19:10:15.245071   24351 main.go:141] libmachine: (functional-280129) Calling .Close
I1024 19:10:15.245274   24351 main.go:141] libmachine: Successfully made call to close driver server
I1024 19:10:15.245288   24351 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-280129 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 3b85be0b10d389e268b35d4c04075b95c295dd24d595e8c5261e43ab94c47de4
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "581000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "60100000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-280129
size: "32900000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "126000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "73100000"
- id: bc649bab30d150c10a84031a7f54c99a8c31069c7bc324a7899d7125d59cc973
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: b77c779c411c443c8accd6aa5da8f1e71fd9083fffe045d42a9e554c8a41cffc
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-280129
size: "30"
- id: 10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "122000000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-280129 image ls --format yaml --alsologtostderr:
I1024 19:10:14.791070   24301 out.go:296] Setting OutFile to fd 1 ...
I1024 19:10:14.791262   24301 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:10:14.791288   24301 out.go:309] Setting ErrFile to fd 2...
I1024 19:10:14.791303   24301 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:10:14.791569   24301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9104/.minikube/bin
I1024 19:10:14.792190   24301 config.go:182] Loaded profile config "functional-280129": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1024 19:10:14.792358   24301 config.go:182] Loaded profile config "functional-280129": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1024 19:10:14.792789   24301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1024 19:10:14.792862   24301 main.go:141] libmachine: Launching plugin server for driver kvm2
I1024 19:10:14.806015   24301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42181
I1024 19:10:14.806380   24301 main.go:141] libmachine: () Calling .GetVersion
I1024 19:10:14.806936   24301 main.go:141] libmachine: Using API Version  1
I1024 19:10:14.806959   24301 main.go:141] libmachine: () Calling .SetConfigRaw
I1024 19:10:14.807321   24301 main.go:141] libmachine: () Calling .GetMachineName
I1024 19:10:14.807508   24301 main.go:141] libmachine: (functional-280129) Calling .GetState
I1024 19:10:14.809367   24301 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1024 19:10:14.809403   24301 main.go:141] libmachine: Launching plugin server for driver kvm2
I1024 19:10:14.823856   24301 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43823
I1024 19:10:14.824217   24301 main.go:141] libmachine: () Calling .GetVersion
I1024 19:10:14.824675   24301 main.go:141] libmachine: Using API Version  1
I1024 19:10:14.824695   24301 main.go:141] libmachine: () Calling .SetConfigRaw
I1024 19:10:14.825011   24301 main.go:141] libmachine: () Calling .GetMachineName
I1024 19:10:14.825194   24301 main.go:141] libmachine: (functional-280129) Calling .DriverName
I1024 19:10:14.825390   24301 ssh_runner.go:195] Run: systemctl --version
I1024 19:10:14.825416   24301 main.go:141] libmachine: (functional-280129) Calling .GetSSHHostname
I1024 19:10:14.827978   24301 main.go:141] libmachine: (functional-280129) DBG | domain functional-280129 has defined MAC address 52:54:00:47:d1:64 in network mk-functional-280129
I1024 19:10:14.828332   24301 main.go:141] libmachine: (functional-280129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:d1:64", ip: ""} in network mk-functional-280129: {Iface:virbr1 ExpiryTime:2023-10-24 20:07:05 +0000 UTC Type:0 Mac:52:54:00:47:d1:64 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:functional-280129 Clientid:01:52:54:00:47:d1:64}
I1024 19:10:14.828360   24301 main.go:141] libmachine: (functional-280129) DBG | domain functional-280129 has defined IP address 192.168.50.11 and MAC address 52:54:00:47:d1:64 in network mk-functional-280129
I1024 19:10:14.828510   24301 main.go:141] libmachine: (functional-280129) Calling .GetSSHPort
I1024 19:10:14.828681   24301 main.go:141] libmachine: (functional-280129) Calling .GetSSHKeyPath
I1024 19:10:14.828848   24301 main.go:141] libmachine: (functional-280129) Calling .GetSSHUsername
I1024 19:10:14.828974   24301 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/functional-280129/id_rsa Username:docker}
I1024 19:10:14.941717   24301 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1024 19:10:14.983523   24301 main.go:141] libmachine: Making call to close driver server
I1024 19:10:14.983539   24301 main.go:141] libmachine: (functional-280129) Calling .Close
I1024 19:10:14.983889   24301 main.go:141] libmachine: Successfully made call to close driver server
I1024 19:10:14.983933   24301 main.go:141] libmachine: Making call to close connection to plugin binary
I1024 19:10:14.983949   24301 main.go:141] libmachine: Making call to close driver server
I1024 19:10:14.983960   24301 main.go:141] libmachine: (functional-280129) Calling .Close
I1024 19:10:14.984347   24301 main.go:141] libmachine: Successfully made call to close driver server
I1024 19:10:14.984344   24301 main.go:141] libmachine: (functional-280129) DBG | Closing plugin on server side
I1024 19:10:14.984365   24301 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-280129 ssh pgrep buildkitd: exit status 1 (237.364263ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 image build -t localhost/my-image:functional-280129 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-280129 image build -t localhost/my-image:functional-280129 testdata/build --alsologtostderr: (3.505996138s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-280129 image build -t localhost/my-image:functional-280129 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 3f9c0cd3336a
Removing intermediate container 3f9c0cd3336a
---> f935541ed047
Step 3/3 : ADD content.txt /
---> 7c2875cec05c
Successfully built 7c2875cec05c
Successfully tagged localhost/my-image:functional-280129
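From the three steps above, the Dockerfile under testdata/build is evidently just the following (reconstructed from the build log, not copied from the source tree):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /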
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-280129 image build -t localhost/my-image:functional-280129 testdata/build --alsologtostderr:
I1024 19:10:15.238763   24392 out.go:296] Setting OutFile to fd 1 ...
I1024 19:10:15.239021   24392 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:10:15.239030   24392 out.go:309] Setting ErrFile to fd 2...
I1024 19:10:15.239034   24392 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1024 19:10:15.239236   24392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9104/.minikube/bin
I1024 19:10:15.239826   24392 config.go:182] Loaded profile config "functional-280129": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1024 19:10:15.240303   24392 config.go:182] Loaded profile config "functional-280129": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1024 19:10:15.240691   24392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1024 19:10:15.240732   24392 main.go:141] libmachine: Launching plugin server for driver kvm2
I1024 19:10:15.256773   24392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42903
I1024 19:10:15.257238   24392 main.go:141] libmachine: () Calling .GetVersion
I1024 19:10:15.257824   24392 main.go:141] libmachine: Using API Version  1
I1024 19:10:15.257845   24392 main.go:141] libmachine: () Calling .SetConfigRaw
I1024 19:10:15.258245   24392 main.go:141] libmachine: () Calling .GetMachineName
I1024 19:10:15.258441   24392 main.go:141] libmachine: (functional-280129) Calling .GetState
I1024 19:10:15.260484   24392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1024 19:10:15.260532   24392 main.go:141] libmachine: Launching plugin server for driver kvm2
I1024 19:10:15.276237   24392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41027
I1024 19:10:15.276641   24392 main.go:141] libmachine: () Calling .GetVersion
I1024 19:10:15.277139   24392 main.go:141] libmachine: Using API Version  1
I1024 19:10:15.277187   24392 main.go:141] libmachine: () Calling .SetConfigRaw
I1024 19:10:15.277527   24392 main.go:141] libmachine: () Calling .GetMachineName
I1024 19:10:15.277710   24392 main.go:141] libmachine: (functional-280129) Calling .DriverName
I1024 19:10:15.277903   24392 ssh_runner.go:195] Run: systemctl --version
I1024 19:10:15.277936   24392 main.go:141] libmachine: (functional-280129) Calling .GetSSHHostname
I1024 19:10:15.281080   24392 main.go:141] libmachine: (functional-280129) DBG | domain functional-280129 has defined MAC address 52:54:00:47:d1:64 in network mk-functional-280129
I1024 19:10:15.281434   24392 main.go:141] libmachine: (functional-280129) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:47:d1:64", ip: ""} in network mk-functional-280129: {Iface:virbr1 ExpiryTime:2023-10-24 20:07:05 +0000 UTC Type:0 Mac:52:54:00:47:d1:64 Iaid: IPaddr:192.168.50.11 Prefix:24 Hostname:functional-280129 Clientid:01:52:54:00:47:d1:64}
I1024 19:10:15.281465   24392 main.go:141] libmachine: (functional-280129) DBG | domain functional-280129 has defined IP address 192.168.50.11 and MAC address 52:54:00:47:d1:64 in network mk-functional-280129
I1024 19:10:15.281538   24392 main.go:141] libmachine: (functional-280129) Calling .GetSSHPort
I1024 19:10:15.281698   24392 main.go:141] libmachine: (functional-280129) Calling .GetSSHKeyPath
I1024 19:10:15.281833   24392 main.go:141] libmachine: (functional-280129) Calling .GetSSHUsername
I1024 19:10:15.281985   24392 sshutil.go:53] new ssh client: &{IP:192.168.50.11 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/functional-280129/id_rsa Username:docker}
I1024 19:10:15.382836   24392 build_images.go:151] Building image from path: /tmp/build.922730224.tar
I1024 19:10:15.382886   24392 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1024 19:10:15.395363   24392 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.922730224.tar
I1024 19:10:15.400166   24392 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.922730224.tar: stat -c "%s %y" /var/lib/minikube/build/build.922730224.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.922730224.tar': No such file or directory
I1024 19:10:15.400200   24392 ssh_runner.go:362] scp /tmp/build.922730224.tar --> /var/lib/minikube/build/build.922730224.tar (3072 bytes)
I1024 19:10:15.434476   24392 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.922730224
I1024 19:10:15.449885   24392 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.922730224 -xf /var/lib/minikube/build/build.922730224.tar
I1024 19:10:15.468668   24392 docker.go:341] Building image: /var/lib/minikube/build/build.922730224
I1024 19:10:15.468739   24392 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-280129 /var/lib/minikube/build/build.922730224
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I1024 19:10:18.652278   24392 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-280129 /var/lib/minikube/build/build.922730224: (3.183513979s)
I1024 19:10:18.652361   24392 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.922730224
I1024 19:10:18.662915   24392 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.922730224.tar
I1024 19:10:18.683374   24392 build_images.go:207] Built localhost/my-image:functional-280129 from /tmp/build.922730224.tar
I1024 19:10:18.683412   24392 build_images.go:123] succeeded building to: functional-280129
I1024 19:10:18.683416   24392 build_images.go:124] failed building to: 
I1024 19:10:18.683467   24392 main.go:141] libmachine: Making call to close driver server
I1024 19:10:18.683481   24392 main.go:141] libmachine: (functional-280129) Calling .Close
I1024 19:10:18.683749   24392 main.go:141] libmachine: (functional-280129) DBG | Closing plugin on server side
I1024 19:10:18.683780   24392 main.go:141] libmachine: Successfully made call to close driver server
I1024 19:10:18.683799   24392 main.go:141] libmachine: Making call to close connection to plugin binary
I1024 19:10:18.683817   24392 main.go:141] libmachine: Making call to close driver server
I1024 19:10:18.683842   24392 main.go:141] libmachine: (functional-280129) Calling .Close
I1024 19:10:18.684065   24392 main.go:141] libmachine: Successfully made call to close driver server
I1024 19:10:18.684086   24392 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.99s)
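
Note: the ssh_runner steps above trace minikube's image-build path end to end: the build context is tarred on the host, copied into /var/lib/minikube/build, unpacked, and passed to "docker build". Below is a minimal Go sketch of the same sequence run against a local Docker daemon rather than the test's SSH runner; the tarball path and image tag are copied from the log and are purely illustrative.

// Sketch of the build flow traced above: unpack a staged build-context
// tarball into a scratch directory, run the legacy "docker build", clean up.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	const contextTar = "/tmp/build.922730224.tar" // staged context; assumed to exist
	buildDir, err := os.MkdirTemp("", "build")    // stand-in for /var/lib/minikube/build/build.N
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(buildDir) // mirrors the "rm -rf" cleanup in the log

	// Unpack the context, then build, as the ssh_runner steps show.
	if err := run("tar", "-C", buildDir, "-xf", contextTar); err != nil {
		panic(err)
	}
	if err := run("docker", "build", "-t", "localhost/my-image:functional-280129", buildDir); err != nil {
		panic(err)
	}
	fmt.Println("built image from", contextTar)
}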

TestFunctional/parallel/ImageCommands/Setup (1.34s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.319325084s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-280129
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.34s)

TestFunctional/parallel/DockerEnv/bash (1s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-280129 docker-env) && out/minikube-linux-amd64 status -p functional-280129"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-280129 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.00s)
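
Note: the DockerEnv check works by eval'ing the "export KEY=..." lines that "minikube docker-env" prints, then confirming "docker images" talks to the VM's daemon. A shell-free Go sketch of the same idea follows; the binary path and profile name are copied from the log, and the parsing assumes bash-style export lines.

// Apply minikube docker-env output to this process, then run "docker images".
package main

import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-280129", "docker-env").Output()
	if err != nil {
		panic(err)
	}
	for _, line := range strings.Split(string(out), "\n") {
		line = strings.TrimSpace(line)
		if !strings.HasPrefix(line, "export ") {
			continue // skip comments such as "# To point your shell..."
		}
		kv := strings.SplitN(strings.TrimPrefix(line, "export "), "=", 2)
		if len(kv) == 2 {
			os.Setenv(kv[0], strings.Trim(kv[1], `"`)) // e.g. DOCKER_HOST, DOCKER_CERT_PATH
		}
	}
	docker := exec.Command("docker", "images")
	docker.Stdout = os.Stdout
	docker.Stderr = os.Stderr
	if err := docker.Run(); err != nil {
		panic(err)
	}
}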

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 image load --daemon gcr.io/google-containers/addon-resizer:functional-280129 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-280129 image load --daemon gcr.io/google-containers/addon-resizer:functional-280129 --alsologtostderr: (5.102328209s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.35s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "230.527583ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "59.953486ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)
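
Note: the Took "..." lines above are plain wall-clock measurements around each CLI run. A one-liner version of that measurement using time.Since, with the binary path and arguments as in the log:

// Time a single "profile list" invocation the way the harness reports it.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list").CombinedOutput()
	elapsed := time.Since(start)
	fmt.Printf("Took %q to run \"profile list\" (err: %v)\n%s", elapsed.String(), err, out)
}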

TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "232.288579ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "66.777722ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 image load --daemon gcr.io/google-containers/addon-resizer:functional-280129 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-280129 image load --daemon gcr.io/google-containers/addon-resizer:functional-280129 --alsologtostderr: (2.739894538s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.97s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.602091883s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-280129
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 image load --daemon gcr.io/google-containers/addon-resizer:functional-280129 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-280129 image load --daemon gcr.io/google-containers/addon-resizer:functional-280129 --alsologtostderr: (3.7115614s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.59s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 image save gcr.io/google-containers/addon-resizer:functional-280129 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-280129 image save gcr.io/google-containers/addon-resizer:functional-280129 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.957920465s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.96s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 image rm gcr.io/google-containers/addon-resizer:functional-280129 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-280129 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.596873674s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.83s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-280129
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 image save --daemon gcr.io/google-containers/addon-resizer:functional-280129 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-280129 image save --daemon gcr.io/google-containers/addon-resizer:functional-280129 --alsologtostderr: (1.889044786s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-280129
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.93s)

TestFunctional/parallel/ServiceCmd/DeployApp (23.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-280129 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-280129 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-69psw" [a9957d75-ef13-4fca-b55f-3bf215e1d260] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-69psw" [a9957d75-ef13-4fca-b55f-3bf215e1d260] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 23.012919903s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (23.22s)
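
Note: the "waiting 10m0s for pods matching app=hello-node" helper boils down to polling pod phases under a deadline. A rough Go sketch of that loop via kubectl follows; the context, namespace and label come from the log, while the 10-second poll interval is an arbitrary choice (the real helper in helpers_test.go uses client-go, not kubectl).

// Poll until every pod matching a label selector reports phase Running.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podsRunning(ctx, ns, selector string) bool {
	out, err := exec.Command("kubectl", "--context", ctx, "-n", ns, "get", "pods",
		"-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
	phases := strings.Fields(string(out))
	if err != nil || len(phases) == 0 {
		return false
	}
	for _, p := range phases {
		if p != "Running" {
			return false
		}
	}
	return true
}

func main() {
	deadline := time.Now().Add(10 * time.Minute) // the test budget is 10m0s
	for time.Now().Before(deadline) {
		if podsRunning("functional-280129", "default", "app=hello-node") {
			fmt.Println("app=hello-node healthy")
			return
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Println("timed out waiting for app=hello-node")
}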

TestFunctional/parallel/MountCmd/any-port (8.1s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-280129 /tmp/TestFunctionalparallelMountCmdany-port2255815577/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1698174601949572547" to /tmp/TestFunctionalparallelMountCmdany-port2255815577/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1698174601949572547" to /tmp/TestFunctionalparallelMountCmdany-port2255815577/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1698174601949572547" to /tmp/TestFunctionalparallelMountCmdany-port2255815577/001/test-1698174601949572547
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-280129 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (238.769009ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 24 19:10 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 24 19:10 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 24 19:10 test-1698174601949572547
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh cat /mount-9p/test-1698174601949572547
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-280129 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [27ecd71e-b8d6-4999-8698-2e0fe59c6430] Pending
helpers_test.go:344: "busybox-mount" [27ecd71e-b8d6-4999-8698-2e0fe59c6430] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [27ecd71e-b8d6-4999-8698-2e0fe59c6430] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [27ecd71e-b8d6-4999-8698-2e0fe59c6430] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.022617014s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-280129 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-280129 /tmp/TestFunctionalparallelMountCmdany-port2255815577/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.10s)
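
Note: the first findmnt probe above fails simply because the 9p mount is still coming up, and the harness retries until it appears. A small retry loop in the same spirit; the attempt count and delay here are arbitrary choices, not the harness's values.

// Retry "findmnt" over minikube ssh until the 9p mount becomes visible.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-280129",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("9p mount visible:\n%s", out)
			return
		}
		fmt.Printf("attempt %d: not mounted yet (%v), retrying\n", attempt, err)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for /mount-9p")
}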

TestFunctional/parallel/MountCmd/specific-port (1.94s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-280129 /tmp/TestFunctionalparallelMountCmdspecific-port2179953561/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-280129 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (283.33406ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-280129 /tmp/TestFunctionalparallelMountCmdspecific-port2179953561/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-280129 ssh "sudo umount -f /mount-9p": exit status 1 (231.928233ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-280129 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-280129 /tmp/TestFunctionalparallelMountCmdspecific-port2179953561/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.94s)

TestFunctional/parallel/ServiceCmd/List (1.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 service list
functional_test.go:1458: (dbg) Done: out/minikube-linux-amd64 -p functional-280129 service list: (1.313737798s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.31s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.6s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-280129 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3328742515/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-280129 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3328742515/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-280129 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3328742515/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-280129 ssh "findmnt -T" /mount1: exit status 1 (287.375396ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-280129 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-280129 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3328742515/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-280129 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3328742515/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-280129 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3328742515/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.60s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-linux-amd64 -p functional-280129 service list -o json: (1.406466641s)
functional_test.go:1493: Took "1.406555198s" to run "out/minikube-linux-amd64 -p functional-280129 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.41s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.11:30742
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.41s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-280129 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.11:30742
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-280129
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-280129
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-280129
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestGvisorAddon (314.52s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-003752 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-003752 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m31.948087691s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-003752 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-003752 cache add gcr.io/k8s-minikube/gvisor-addon:2: (25.536073061s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-003752 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-003752 addons enable gvisor: (3.832107224s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [20fcb529-bb9d-404f-a10d-fa9c1bb57212] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.020792861s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-003752 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [9efb320b-ccb9-4d96-b47c-a078a46bc7fb] Pending
helpers_test.go:344: "nginx-gvisor" [9efb320b-ccb9-4d96-b47c-a078a46bc7fb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1024 19:41:04.732133   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/skaffold-807036/client.crt: no such file or directory
E1024 19:41:04.737488   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/skaffold-807036/client.crt: no such file or directory
E1024 19:41:04.747817   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/skaffold-807036/client.crt: no such file or directory
E1024 19:41:04.768157   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/skaffold-807036/client.crt: no such file or directory
E1024 19:41:04.808469   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/skaffold-807036/client.crt: no such file or directory
E1024 19:41:04.888804   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/skaffold-807036/client.crt: no such file or directory
E1024 19:41:05.049601   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/skaffold-807036/client.crt: no such file or directory
E1024 19:41:05.370789   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/skaffold-807036/client.crt: no such file or directory
E1024 19:41:06.011147   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/skaffold-807036/client.crt: no such file or directory
E1024 19:41:07.292204   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/skaffold-807036/client.crt: no such file or directory
helpers_test.go:344: "nginx-gvisor" [9efb320b-ccb9-4d96-b47c-a078a46bc7fb] Running
E1024 19:41:09.852422   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/skaffold-807036/client.crt: no such file or directory
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 14.035730858s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-003752
E1024 19:41:14.974216   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/skaffold-807036/client.crt: no such file or directory
E1024 19:41:25.214465   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/skaffold-807036/client.crt: no such file or directory
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-003752: (1m32.718193375s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-003752 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E1024 19:43:01.370792   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-003752 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m9.905073953s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [20fcb529-bb9d-404f-a10d-fa9c1bb57212] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:344: "gvisor" [20fcb529-bb9d-404f-a10d-fa9c1bb57212] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.028706131s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [9efb320b-ccb9-4d96-b47c-a078a46bc7fb] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.012717581s
helpers_test.go:175: Cleaning up "gvisor-003752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-003752
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-003752: (1.206801372s)
--- PASS: TestGvisorAddon (314.52s)
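
Note: both gvisor waits above poll labelled pods with a 4m0s budget. Where only readiness matters, "kubectl wait" expresses the same check in one command; this sketch shells out to it with the selector and namespace from the log. It is an alternative formulation, not a copy of the test's own helper.

// Block until the gvisor pod is Ready, or fail after the 4-minute budget.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "gvisor-003752",
		"-n", "kube-system", "wait", "pod",
		"-l", "kubernetes.io/minikube-addons=gvisor",
		"--for=condition=Ready", "--timeout=4m")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}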

TestImageBuild/serial/Setup (52.65s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-583912 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-583912 --driver=kvm2 : (52.646377213s)
--- PASS: TestImageBuild/serial/Setup (52.65s)

TestImageBuild/serial/NormalBuild (1.6s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-583912
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-583912: (1.602903212s)
--- PASS: TestImageBuild/serial/NormalBuild (1.60s)

TestImageBuild/serial/BuildWithBuildArg (1.32s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-583912
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-583912: (1.318249324s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.32s)

TestImageBuild/serial/BuildWithDockerIgnore (0.38s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-583912
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.38s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.29s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-583912
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.29s)

TestIngressAddonLegacy/StartLegacyK8sCluster (76.75s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-855804 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
E1024 19:11:28.408627   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-855804 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m16.750029414s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (76.75s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.4s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-855804 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-855804 addons enable ingress --alsologtostderr -v=5: (17.394907366s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.40s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.64s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-855804 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.64s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (38.27s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-855804 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-855804 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (8.414141045s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-855804 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:231: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-855804 replace --force -f testdata/nginx-ingress-v1beta1.yaml: exit status 1 (313.999318ms)

** stderr ** 
	Error from server (InternalError): Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s: dial tcp 10.98.210.132:443: connect: connection refused

** /stderr **
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-855804 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-855804 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7fe28ee9-6600-42d0-a0ad-c224c96aa6cf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7fe28ee9-6600-42d0-a0ad-c224c96aa6cf] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.019465412s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-855804 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-855804 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-855804 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.210
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-855804 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-855804 addons disable ingress-dns --alsologtostderr -v=1: (10.266263782s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-855804 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-855804 addons disable ingress --alsologtostderr -v=1: (7.468626935s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (38.27s)
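
Note: the ingress check above curls 127.0.0.1 from inside the VM with a spoofed Host header so the nginx ingress routes by nginx.example.com. The same probe can be made from the host in Go by overriding req.Host while dialing the node IP reported above (192.168.39.210 in this run); a sketch:

// HTTP GET against the node IP with a Host header matching the Ingress rule.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://192.168.39.210/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com" // routes the request to the nginx Ingress rule
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}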

TestJSONOutput/start/Command (70.55s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-517368 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E1024 19:13:44.562264   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
E1024 19:14:12.248953   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
E1024 19:14:26.867883   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
E1024 19:14:26.873244   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
E1024 19:14:26.883521   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
E1024 19:14:26.903842   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
E1024 19:14:26.944135   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
E1024 19:14:27.024458   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
E1024 19:14:27.184887   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
E1024 19:14:27.505476   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
E1024 19:14:28.146385   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
E1024 19:14:29.427587   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
E1024 19:14:31.988068   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
E1024 19:14:37.108948   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
E1024 19:14:47.349433   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-517368 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m10.54719607s)
--- PASS: TestJSONOutput/start/Command (70.55s)
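
Note: with --output=json, minikube emits one JSON event per line (CloudEvents-style envelopes), which is what the JSONOutput tests validate. A sketch of consuming that stream with json.Decoder follows; the command line mirrors the log, and the "type" field (for example io.k8s.sigs.minikube.step) is taken from minikube's JSON output rather than a schema guaranteed here.

// Stream-decode minikube's JSON events and print each event's type.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "json-output-517368",
		"--output=json", "--user=testUser", "--memory=2200", "--driver=kvm2")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	dec := json.NewDecoder(stdout) // one JSON object per line
	for dec.More() {
		var ev map[string]any
		if err := dec.Decode(&ev); err != nil {
			break
		}
		fmt.Println("event:", ev["type"])
	}
	_ = cmd.Wait()
}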

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.58s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-517368 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.58s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.56s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-517368 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.56s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.11s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-517368 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-517368 --output=json --user=testUser: (8.106713635s)
--- PASS: TestJSONOutput/stop/Command (8.11s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-963602 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-963602 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (77.725737ms)

-- stdout --
	{"specversion":"1.0","id":"e5fa1892-f548-44d8-8041-7cc3453ba5ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-963602] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"266a7d37-35e5-4da6-81cd-b47a6465f917","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17485"}}
	{"specversion":"1.0","id":"ac5748a3-97b6-4b46-afd2-e7eb48b6eb1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"81663dce-860b-4e63-86d6-8a3150d880bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17485-9104/kubeconfig"}}
	{"specversion":"1.0","id":"332291cb-31f3-414e-86ac-23ceb96d216d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9104/.minikube"}}
	{"specversion":"1.0","id":"e294782a-1208-4c92-be91-357d6e2dc289","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"7c1c18e9-111b-4a09-b36e-d2c9c64431d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9fa0d5e4-5eba-4632-9ab0-05253b5386f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-963602" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-963602
--- PASS: TestErrorJSONOutput (0.21s)
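
Note: this test exercises the failure path of --output=json: even an unsupported --driver must yield machine-readable output, ending in a single io.k8s.sigs.minikube.error event whose exitcode field (56, name DRV_UNSUPPORTED_OS) matches the process exit status. A hedged sketch of that assertion; the binary path, profile name, and exec plumbing are illustrative, not the test's actual code:

package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // Start with a bogus driver; a non-zero exit is the expected outcome.
    cmd := exec.Command("out/minikube-linux-amd64", "start",
        "-p", "json-output-error-demo", "--output=json", "--driver=fail")
    out, _ := cmd.Output()

    errEvents := 0
    for _, line := range strings.Split(string(out), "\n") {
        var ev struct {
            Type string            `json:"type"`
            Data map[string]string `json:"data"`
        }
        if json.Unmarshal([]byte(line), &ev) != nil {
            continue
        }
        if ev.Type == "io.k8s.sigs.minikube.error" {
            errEvents++
            fmt.Printf("error event: exitcode=%s name=%s\n",
                ev.Data["exitcode"], ev.Data["name"])
        }
    }
    if cmd.ProcessState != nil {
        fmt.Printf("error events: %d (want 1), exit code: %d (want 56)\n",
            errEvents, cmd.ProcessState.ExitCode())
    }
}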

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (108.39s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-894453 --driver=kvm2 
E1024 19:15:07.829845   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
E1024 19:15:48.790186   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-894453 --driver=kvm2 : (55.01480462s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-897208 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-897208 --driver=kvm2 : (50.668327128s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-894453
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-897208
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-897208" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-897208
helpers_test.go:175: Cleaning up "first-894453" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-894453
--- PASS: TestMinikubeProfile (108.39s)

TestMountStart/serial/StartWithMountFirst (28.69s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-551636 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E1024 19:17:10.710708   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-551636 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (27.685608127s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.69s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-551636 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-551636 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
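
Note: the VerifyMount* subtests in this group all run the same two probes shown above: `ls /minikube-host` must succeed inside the VM, and the mount table must contain a 9p entry (the protocol minikube's host mount uses). A sketch of the pair, assuming the binary path and a profile name from this run:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// verifyMount re-runs the two checks from mount_start_test.go against an
// existing profile: the host dir must be listable and mounted via 9p.
func verifyMount(profile string) error {
    if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
        "ssh", "--", "ls", "/minikube-host").CombinedOutput(); err != nil {
        return fmt.Errorf("ls /minikube-host: %v: %s", err, out)
    }
    out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
        "ssh", "--", "mount").CombinedOutput()
    if err != nil {
        return err
    }
    if !strings.Contains(string(out), "9p") {
        return fmt.Errorf("no 9p filesystem in mount output")
    }
    return nil
}

func main() {
    if err := verifyMount("mount-start-1-551636"); err != nil {
        fmt.Println("verify failed:", err)
    }
}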

TestMountStart/serial/StartWithMountSecond (29.45s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-573799 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-573799 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (28.450872505s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.45s)

TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-573799 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-573799 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

TestMountStart/serial/DeleteFirst (0.89s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-551636 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

TestMountStart/serial/VerifyMountPostDelete (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-573799 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-573799 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

TestMountStart/serial/Stop (11.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-573799
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-573799: (11.172357604s)
--- PASS: TestMountStart/serial/Stop (11.17s)

TestMountStart/serial/RestartStopped (24.5s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-573799
E1024 19:18:01.370308   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
E1024 19:18:01.375558   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
E1024 19:18:01.385875   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
E1024 19:18:01.406114   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
E1024 19:18:01.446408   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
E1024 19:18:01.526743   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
E1024 19:18:01.687138   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
E1024 19:18:02.007818   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
E1024 19:18:02.648702   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
E1024 19:18:03.929189   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
E1024 19:18:06.490152   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
E1024 19:18:11.610996   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
E1024 19:18:21.851774   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-573799: (23.501562442s)
--- PASS: TestMountStart/serial/RestartStopped (24.50s)

TestMountStart/serial/VerifyMountPostStop (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-573799 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-573799 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

TestMultiNode/serial/FreshStart2Nodes (132.85s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-313452 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E1024 19:18:42.332472   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
E1024 19:18:44.562912   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
E1024 19:19:23.293273   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
E1024 19:19:26.867322   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
E1024 19:19:54.550965   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-313452 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m12.406779153s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (132.85s)

TestMultiNode/serial/DeployApp2Nodes (6.38s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313452 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313452 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-313452 -- rollout status deployment/busybox: (4.556828908s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313452 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313452 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313452 -- exec busybox-5bc68d56bd-lmmfw -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313452 -- exec busybox-5bc68d56bd-s9dzp -- nslookup kubernetes.io
E1024 19:20:45.213425   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313452 -- exec busybox-5bc68d56bd-lmmfw -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313452 -- exec busybox-5bc68d56bd-s9dzp -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313452 -- exec busybox-5bc68d56bd-lmmfw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313452 -- exec busybox-5bc68d56bd-s9dzp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.38s)
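
Note: the deployment comes from testdata/multinodes/multinode-pod-dns-test.yaml, which puts one busybox replica on each node; the exec probes then require that cluster DNS resolves a public name, the short service name, and the full service FQDN from both pods. A sketch of the probe loop, with pod names taken from this log and kubectl context handling omitted:

package main

import (
    "fmt"
    "os/exec"
)

// checkDNS mirrors the probes above: for each busybox pod, exec nslookup for
// the three names and require success.
func checkDNS(pods, names []string) error {
    for _, pod := range pods {
        for _, name := range names {
            out, err := exec.Command("kubectl", "exec", pod, "--",
                "nslookup", name).CombinedOutput()
            if err != nil {
                return fmt.Errorf("%s: nslookup %s: %v: %s", pod, name, err, out)
            }
        }
    }
    return nil
}

func main() {
    pods := []string{"busybox-5bc68d56bd-lmmfw", "busybox-5bc68d56bd-s9dzp"}
    names := []string{"kubernetes.io", "kubernetes.default",
        "kubernetes.default.svc.cluster.local"}
    if err := checkDNS(pods, names); err != nil {
        fmt.Println(err)
    }
}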

TestMultiNode/serial/PingHostFrom2Pods (0.93s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313452 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313452 -- exec busybox-5bc68d56bd-lmmfw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313452 -- exec busybox-5bc68d56bd-lmmfw -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313452 -- exec busybox-5bc68d56bd-s9dzp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-313452 -- exec busybox-5bc68d56bd-s9dzp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.93s)
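
Note: the shell pipeline above takes busybox nslookup output for host.minikube.internal, keeps line 5 (`awk 'NR==5'`), and extracts the third space-separated field (`cut -d' ' -f3`), i.e. the resolved address, which each pod then pings; here that is the host side of the libvirt network, 192.168.39.1. The same extraction in Go, with a sample output that is illustrative rather than captured from this run:

package main

import (
    "fmt"
    "strings"
)

// extractIP mimics `nslookup ... | awk 'NR==5' | cut -d' ' -f3`: take line 5
// of the output and return its third space-separated field.
func extractIP(nslookup string) string {
    lines := strings.Split(nslookup, "\n")
    if len(lines) < 5 {
        return ""
    }
    fields := strings.SplitN(lines[4], " ", 4)
    if len(fields) < 3 {
        return ""
    }
    return fields[2]
}

func main() {
    sample := "Server:    10.96.0.10\n" +
        "Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
        "\n" +
        "Name:      host.minikube.internal\n" +
        "Address 1: 192.168.39.1 host.minikube.internal\n"
    fmt.Println(extractIP(sample)) // 192.168.39.1
}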

TestMultiNode/serial/AddNode (52.82s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-313452 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-313452 -v 3 --alsologtostderr: (52.242046185s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (52.82s)

TestMultiNode/serial/ProfileList (0.22s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

TestMultiNode/serial/CopyFile (7.79s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 cp testdata/cp-test.txt multinode-313452:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 ssh -n multinode-313452 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 cp multinode-313452:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1490151278/001/cp-test_multinode-313452.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 ssh -n multinode-313452 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 cp multinode-313452:/home/docker/cp-test.txt multinode-313452-m02:/home/docker/cp-test_multinode-313452_multinode-313452-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 ssh -n multinode-313452 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 ssh -n multinode-313452-m02 "sudo cat /home/docker/cp-test_multinode-313452_multinode-313452-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 cp multinode-313452:/home/docker/cp-test.txt multinode-313452-m03:/home/docker/cp-test_multinode-313452_multinode-313452-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 ssh -n multinode-313452 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 ssh -n multinode-313452-m03 "sudo cat /home/docker/cp-test_multinode-313452_multinode-313452-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 cp testdata/cp-test.txt multinode-313452-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 ssh -n multinode-313452-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 cp multinode-313452-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1490151278/001/cp-test_multinode-313452-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 ssh -n multinode-313452-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 cp multinode-313452-m02:/home/docker/cp-test.txt multinode-313452:/home/docker/cp-test_multinode-313452-m02_multinode-313452.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 ssh -n multinode-313452-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 ssh -n multinode-313452 "sudo cat /home/docker/cp-test_multinode-313452-m02_multinode-313452.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 cp multinode-313452-m02:/home/docker/cp-test.txt multinode-313452-m03:/home/docker/cp-test_multinode-313452-m02_multinode-313452-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 ssh -n multinode-313452-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 ssh -n multinode-313452-m03 "sudo cat /home/docker/cp-test_multinode-313452-m02_multinode-313452-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 cp testdata/cp-test.txt multinode-313452-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 ssh -n multinode-313452-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 cp multinode-313452-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1490151278/001/cp-test_multinode-313452-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 ssh -n multinode-313452-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 cp multinode-313452-m03:/home/docker/cp-test.txt multinode-313452:/home/docker/cp-test_multinode-313452-m03_multinode-313452.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 ssh -n multinode-313452-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 ssh -n multinode-313452 "sudo cat /home/docker/cp-test_multinode-313452-m03_multinode-313452.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 cp multinode-313452-m03:/home/docker/cp-test.txt multinode-313452-m02:/home/docker/cp-test_multinode-313452-m03_multinode-313452-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 ssh -n multinode-313452-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 ssh -n multinode-313452-m02 "sudo cat /home/docker/cp-test_multinode-313452-m03_multinode-313452-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.79s)
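
Note: the block above walks a full copy matrix: testdata into each node, each node back to a local temp dir, and each node into every other node, with every hop verified by ssh'ing in and cat'ing the file back. One cell of that matrix as a sketch; the binary and paths are placeholders mirroring the log:

package main

import (
    "bytes"
    "fmt"
    "os"
    "os/exec"
)

// copyAndVerify performs one hop: cp a local file into a node, then ssh in
// and cat it back to compare contents.
func copyAndVerify(profile, node, src string) error {
    dst := node + ":/home/docker/cp-test.txt"
    if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
        "cp", src, dst).CombinedOutput(); err != nil {
        return fmt.Errorf("cp: %v: %s", err, out)
    }
    got, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
        "ssh", "-n", node, "sudo cat /home/docker/cp-test.txt").CombinedOutput()
    if err != nil {
        return fmt.Errorf("ssh cat: %v: %s", err, got)
    }
    want, err := os.ReadFile(src)
    if err != nil {
        return err
    }
    if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
        return fmt.Errorf("content mismatch on %s", node)
    }
    return nil
}

func main() {
    if err := copyAndVerify("multinode-313452", "multinode-313452-m02",
        "testdata/cp-test.txt"); err != nil {
        fmt.Println(err)
    }
}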

TestMultiNode/serial/StopNode (4.07s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-313452 node stop m03: (3.102148302s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-313452 status: exit status 7 (496.707667ms)

-- stdout --
	multinode-313452
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-313452-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-313452-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-313452 status --alsologtostderr: exit status 7 (468.121333ms)

-- stdout --
	multinode-313452
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-313452-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-313452-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1024 19:21:51.557925   31526 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:21:51.558238   31526 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:21:51.558251   31526 out.go:309] Setting ErrFile to fd 2...
	I1024 19:21:51.558258   31526 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:21:51.558545   31526 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9104/.minikube/bin
	I1024 19:21:51.558722   31526 out.go:303] Setting JSON to false
	I1024 19:21:51.558764   31526 mustload.go:65] Loading cluster: multinode-313452
	I1024 19:21:51.559289   31526 config.go:182] Loaded profile config "multinode-313452": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1024 19:21:51.559315   31526 status.go:255] checking status of multinode-313452 ...
	I1024 19:21:51.559851   31526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:21:51.559915   31526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:21:51.560007   31526 notify.go:220] Checking for updates...
	I1024 19:21:51.581812   31526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40115
	I1024 19:21:51.582252   31526 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:21:51.582865   31526 main.go:141] libmachine: Using API Version  1
	I1024 19:21:51.582894   31526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:21:51.583259   31526 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:21:51.583547   31526 main.go:141] libmachine: (multinode-313452) Calling .GetState
	I1024 19:21:51.585108   31526 status.go:330] multinode-313452 host status = "Running" (err=<nil>)
	I1024 19:21:51.585127   31526 host.go:66] Checking if "multinode-313452" exists ...
	I1024 19:21:51.585383   31526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:21:51.585457   31526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:21:51.599261   31526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36551
	I1024 19:21:51.599673   31526 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:21:51.600217   31526 main.go:141] libmachine: Using API Version  1
	I1024 19:21:51.600243   31526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:21:51.600556   31526 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:21:51.600731   31526 main.go:141] libmachine: (multinode-313452) Calling .GetIP
	I1024 19:21:51.603499   31526 main.go:141] libmachine: (multinode-313452) DBG | domain multinode-313452 has defined MAC address 52:54:00:be:a1:7c in network mk-multinode-313452
	I1024 19:21:51.603937   31526 main.go:141] libmachine: (multinode-313452) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:a1:7c", ip: ""} in network mk-multinode-313452: {Iface:virbr1 ExpiryTime:2023-10-24 20:18:42 +0000 UTC Type:0 Mac:52:54:00:be:a1:7c Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:multinode-313452 Clientid:01:52:54:00:be:a1:7c}
	I1024 19:21:51.603978   31526 main.go:141] libmachine: (multinode-313452) DBG | domain multinode-313452 has defined IP address 192.168.39.178 and MAC address 52:54:00:be:a1:7c in network mk-multinode-313452
	I1024 19:21:51.604091   31526 host.go:66] Checking if "multinode-313452" exists ...
	I1024 19:21:51.604438   31526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:21:51.604475   31526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:21:51.618618   31526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I1024 19:21:51.619001   31526 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:21:51.619486   31526 main.go:141] libmachine: Using API Version  1
	I1024 19:21:51.619508   31526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:21:51.619789   31526 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:21:51.619962   31526 main.go:141] libmachine: (multinode-313452) Calling .DriverName
	I1024 19:21:51.620128   31526 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 19:21:51.620163   31526 main.go:141] libmachine: (multinode-313452) Calling .GetSSHHostname
	I1024 19:21:51.624162   31526 main.go:141] libmachine: (multinode-313452) DBG | domain multinode-313452 has defined MAC address 52:54:00:be:a1:7c in network mk-multinode-313452
	I1024 19:21:51.624708   31526 main.go:141] libmachine: (multinode-313452) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:a1:7c", ip: ""} in network mk-multinode-313452: {Iface:virbr1 ExpiryTime:2023-10-24 20:18:42 +0000 UTC Type:0 Mac:52:54:00:be:a1:7c Iaid: IPaddr:192.168.39.178 Prefix:24 Hostname:multinode-313452 Clientid:01:52:54:00:be:a1:7c}
	I1024 19:21:51.624744   31526 main.go:141] libmachine: (multinode-313452) DBG | domain multinode-313452 has defined IP address 192.168.39.178 and MAC address 52:54:00:be:a1:7c in network mk-multinode-313452
	I1024 19:21:51.624886   31526 main.go:141] libmachine: (multinode-313452) Calling .GetSSHPort
	I1024 19:21:51.625084   31526 main.go:141] libmachine: (multinode-313452) Calling .GetSSHKeyPath
	I1024 19:21:51.625230   31526 main.go:141] libmachine: (multinode-313452) Calling .GetSSHUsername
	I1024 19:21:51.625420   31526 sshutil.go:53] new ssh client: &{IP:192.168.39.178 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/multinode-313452/id_rsa Username:docker}
	I1024 19:21:51.718524   31526 ssh_runner.go:195] Run: systemctl --version
	I1024 19:21:51.725026   31526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:21:51.740668   31526 kubeconfig.go:92] found "multinode-313452" server: "https://192.168.39.178:8443"
	I1024 19:21:51.740696   31526 api_server.go:166] Checking apiserver status ...
	I1024 19:21:51.740731   31526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1024 19:21:51.754874   31526 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1912/cgroup
	I1024 19:21:51.763969   31526 api_server.go:182] apiserver freezer: "2:freezer:/kubepods/burstable/podcb39dfd8f95f554d9af793e282093c9e/93f7f142e48f2244d87a219a1db1bbbd2f01ee94f44f3de1e4b44b1aa91d738f"
	I1024 19:21:51.764028   31526 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcb39dfd8f95f554d9af793e282093c9e/93f7f142e48f2244d87a219a1db1bbbd2f01ee94f44f3de1e4b44b1aa91d738f/freezer.state
	I1024 19:21:51.772844   31526 api_server.go:204] freezer state: "THAWED"
	I1024 19:21:51.772867   31526 api_server.go:253] Checking apiserver healthz at https://192.168.39.178:8443/healthz ...
	I1024 19:21:51.777936   31526 api_server.go:279] https://192.168.39.178:8443/healthz returned 200:
	ok
	I1024 19:21:51.777961   31526 status.go:421] multinode-313452 apiserver status = Running (err=<nil>)
	I1024 19:21:51.777973   31526 status.go:257] multinode-313452 status: &{Name:multinode-313452 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1024 19:21:51.777994   31526 status.go:255] checking status of multinode-313452-m02 ...
	I1024 19:21:51.778339   31526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:21:51.778380   31526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:21:51.792828   31526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46315
	I1024 19:21:51.793190   31526 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:21:51.793602   31526 main.go:141] libmachine: Using API Version  1
	I1024 19:21:51.793628   31526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:21:51.793917   31526 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:21:51.794107   31526 main.go:141] libmachine: (multinode-313452-m02) Calling .GetState
	I1024 19:21:51.795563   31526 status.go:330] multinode-313452-m02 host status = "Running" (err=<nil>)
	I1024 19:21:51.795575   31526 host.go:66] Checking if "multinode-313452-m02" exists ...
	I1024 19:21:51.795893   31526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:21:51.795945   31526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:21:51.810620   31526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42757
	I1024 19:21:51.811028   31526 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:21:51.811568   31526 main.go:141] libmachine: Using API Version  1
	I1024 19:21:51.811593   31526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:21:51.811950   31526 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:21:51.812145   31526 main.go:141] libmachine: (multinode-313452-m02) Calling .GetIP
	I1024 19:21:51.814983   31526 main.go:141] libmachine: (multinode-313452-m02) DBG | domain multinode-313452-m02 has defined MAC address 52:54:00:6b:ae:d0 in network mk-multinode-313452
	I1024 19:21:51.815310   31526 main.go:141] libmachine: (multinode-313452-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ae:d0", ip: ""} in network mk-multinode-313452: {Iface:virbr1 ExpiryTime:2023-10-24 20:20:03 +0000 UTC Type:0 Mac:52:54:00:6b:ae:d0 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-313452-m02 Clientid:01:52:54:00:6b:ae:d0}
	I1024 19:21:51.815340   31526 main.go:141] libmachine: (multinode-313452-m02) DBG | domain multinode-313452-m02 has defined IP address 192.168.39.159 and MAC address 52:54:00:6b:ae:d0 in network mk-multinode-313452
	I1024 19:21:51.815457   31526 host.go:66] Checking if "multinode-313452-m02" exists ...
	I1024 19:21:51.815819   31526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:21:51.815853   31526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:21:51.830279   31526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37877
	I1024 19:21:51.830668   31526 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:21:51.831182   31526 main.go:141] libmachine: Using API Version  1
	I1024 19:21:51.831208   31526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:21:51.831530   31526 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:21:51.831704   31526 main.go:141] libmachine: (multinode-313452-m02) Calling .DriverName
	I1024 19:21:51.831867   31526 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1024 19:21:51.831891   31526 main.go:141] libmachine: (multinode-313452-m02) Calling .GetSSHHostname
	I1024 19:21:51.834869   31526 main.go:141] libmachine: (multinode-313452-m02) DBG | domain multinode-313452-m02 has defined MAC address 52:54:00:6b:ae:d0 in network mk-multinode-313452
	I1024 19:21:51.835299   31526 main.go:141] libmachine: (multinode-313452-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:ae:d0", ip: ""} in network mk-multinode-313452: {Iface:virbr1 ExpiryTime:2023-10-24 20:20:03 +0000 UTC Type:0 Mac:52:54:00:6b:ae:d0 Iaid: IPaddr:192.168.39.159 Prefix:24 Hostname:multinode-313452-m02 Clientid:01:52:54:00:6b:ae:d0}
	I1024 19:21:51.835338   31526 main.go:141] libmachine: (multinode-313452-m02) DBG | domain multinode-313452-m02 has defined IP address 192.168.39.159 and MAC address 52:54:00:6b:ae:d0 in network mk-multinode-313452
	I1024 19:21:51.835447   31526 main.go:141] libmachine: (multinode-313452-m02) Calling .GetSSHPort
	I1024 19:21:51.835602   31526 main.go:141] libmachine: (multinode-313452-m02) Calling .GetSSHKeyPath
	I1024 19:21:51.835721   31526 main.go:141] libmachine: (multinode-313452-m02) Calling .GetSSHUsername
	I1024 19:21:51.835841   31526 sshutil.go:53] new ssh client: &{IP:192.168.39.159 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17485-9104/.minikube/machines/multinode-313452-m02/id_rsa Username:docker}
	I1024 19:21:51.934228   31526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1024 19:21:51.949074   31526 status.go:257] multinode-313452-m02 status: &{Name:multinode-313452-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1024 19:21:51.949121   31526 status.go:255] checking status of multinode-313452-m03 ...
	I1024 19:21:51.949509   31526 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:21:51.949561   31526 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:21:51.964591   31526 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43433
	I1024 19:21:51.965009   31526 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:21:51.965500   31526 main.go:141] libmachine: Using API Version  1
	I1024 19:21:51.965527   31526 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:21:51.965839   31526 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:21:51.966033   31526 main.go:141] libmachine: (multinode-313452-m03) Calling .GetState
	I1024 19:21:51.967350   31526 status.go:330] multinode-313452-m03 host status = "Stopped" (err=<nil>)
	I1024 19:21:51.967362   31526 status.go:343] host is not running, skipping remaining checks
	I1024 19:21:51.967368   31526 status.go:257] multinode-313452-m03 status: &{Name:multinode-313452-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (4.07s)
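
Note: the --alsologtostderr trace above shows how status derives "apiserver: Running": pgrep the kube-apiserver process, read its cgroup from /proc/<pid>/cgroup, require the freezer state to be THAWED, and only then probe https://<ip>:8443/healthz. A sketch of the freezer step alone, assuming the cgroup v1 layout seen in the log; the path argument is a placeholder:

package main

import (
    "fmt"
    "os"
    "strings"
)

// apiserverThawed reports whether the apiserver pod's freezer cgroup is
// THAWED, the precondition the log shows before the /healthz probe.
func apiserverThawed(cgroupDir string) (bool, error) {
    b, err := os.ReadFile(cgroupDir + "/freezer.state")
    if err != nil {
        return false, err
    }
    return strings.TrimSpace(string(b)) == "THAWED", nil
}

func main() {
    // Placeholder path; the real one is parsed from /proc/<pid>/cgroup.
    ok, err := apiserverThawed("/sys/fs/cgroup/freezer/kubepods/burstable/pod<hash>/<container>")
    fmt.Println(ok, err)
}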

TestMultiNode/serial/StartAfterStop (32.36s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-313452 node start m03 --alsologtostderr: (31.713928262s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (32.36s)

TestMultiNode/serial/RestartKeepsNodes (174.43s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-313452
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-313452
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-313452: (27.82442486s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-313452 --wait=true -v=8 --alsologtostderr
E1024 19:23:01.370331   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
E1024 19:23:29.054470   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
E1024 19:23:44.560962   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
E1024 19:24:26.867161   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
E1024 19:25:07.610080   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-313452 --wait=true -v=8 --alsologtostderr: (2m26.482150088s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-313452
--- PASS: TestMultiNode/serial/RestartKeepsNodes (174.43s)

TestMultiNode/serial/DeleteNode (1.73s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-313452 node delete m03: (1.194803609s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.73s)
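
Note: the go-template passed to kubectl above prints the status of each remaining node's Ready condition, one per line. kubectl evaluates the template over the object's JSON form, so the lowercase paths (.items, .status.conditions, .type) resolve as map keys, which the stand-in data below mimics (values are illustrative):

package main

import (
    "os"
    "text/template"
)

func main() {
    // The exact template from the kubectl invocation above.
    const tpl = `{{range .items}}{{range .status.conditions}}` +
        `{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

    // Minimal stand-in for a two-node NodeList.
    list := map[string]any{
        "items": []any{
            map[string]any{"status": map[string]any{"conditions": []any{
                map[string]any{"type": "Ready", "status": "True"},
            }}},
            map[string]any{"status": map[string]any{"conditions": []any{
                map[string]any{"type": "Ready", "status": "True"},
            }}},
        },
    }
    t := template.Must(template.New("ready").Parse(tpl))
    _ = t.Execute(os.Stdout, list) // prints " True" once per Ready node
}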

TestMultiNode/serial/StopMultiNode (25.67s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-313452 stop: (25.482969627s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-313452 status: exit status 7 (92.204561ms)

-- stdout --
	multinode-313452
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-313452-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-313452 status --alsologtostderr: exit status 7 (92.419116ms)

-- stdout --
	multinode-313452
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-313452-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1024 19:25:46.114872   33280 out.go:296] Setting OutFile to fd 1 ...
	I1024 19:25:46.115141   33280 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:25:46.115150   33280 out.go:309] Setting ErrFile to fd 2...
	I1024 19:25:46.115157   33280 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1024 19:25:46.115334   33280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17485-9104/.minikube/bin
	I1024 19:25:46.115562   33280 out.go:303] Setting JSON to false
	I1024 19:25:46.115598   33280 mustload.go:65] Loading cluster: multinode-313452
	I1024 19:25:46.115680   33280 notify.go:220] Checking for updates...
	I1024 19:25:46.116029   33280 config.go:182] Loaded profile config "multinode-313452": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1024 19:25:46.116045   33280 status.go:255] checking status of multinode-313452 ...
	I1024 19:25:46.116457   33280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:25:46.116537   33280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:25:46.133057   33280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45745
	I1024 19:25:46.133455   33280 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:25:46.133958   33280 main.go:141] libmachine: Using API Version  1
	I1024 19:25:46.133980   33280 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:25:46.134280   33280 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:25:46.134454   33280 main.go:141] libmachine: (multinode-313452) Calling .GetState
	I1024 19:25:46.135916   33280 status.go:330] multinode-313452 host status = "Stopped" (err=<nil>)
	I1024 19:25:46.135937   33280 status.go:343] host is not running, skipping remaining checks
	I1024 19:25:46.135943   33280 status.go:257] multinode-313452 status: &{Name:multinode-313452 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1024 19:25:46.135981   33280 status.go:255] checking status of multinode-313452-m02 ...
	I1024 19:25:46.136256   33280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1024 19:25:46.136307   33280 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1024 19:25:46.150377   33280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41367
	I1024 19:25:46.150721   33280 main.go:141] libmachine: () Calling .GetVersion
	I1024 19:25:46.151129   33280 main.go:141] libmachine: Using API Version  1
	I1024 19:25:46.151153   33280 main.go:141] libmachine: () Calling .SetConfigRaw
	I1024 19:25:46.151438   33280 main.go:141] libmachine: () Calling .GetMachineName
	I1024 19:25:46.151579   33280 main.go:141] libmachine: (multinode-313452-m02) Calling .GetState
	I1024 19:25:46.152796   33280 status.go:330] multinode-313452-m02 host status = "Stopped" (err=<nil>)
	I1024 19:25:46.152811   33280 status.go:343] host is not running, skipping remaining checks
	I1024 19:25:46.152819   33280 status.go:257] multinode-313452-m02 status: &{Name:multinode-313452-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.67s)
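
Note: as in StopNode above, `minikube status` signals a stopped host through exit code 7 rather than a generic error, which is what lets the test treat the non-zero exit as the expected outcome. A sketch of branching on that convention; the code-to-meaning mapping is inferred from this log, not imported from minikube:

package main

import (
    "errors"
    "fmt"
    "os/exec"
)

// statusStopped branches on the status exit codes observed in this run:
// 0 means everything is running, 7 means at least one host is stopped.
func statusStopped(profile string) (bool, error) {
    err := exec.Command("out/minikube-linux-amd64", "-p", profile, "status").Run()
    if err == nil {
        return false, nil
    }
    var ee *exec.ExitError
    if errors.As(err, &ee) && ee.ExitCode() == 7 {
        return true, nil
    }
    return false, err
}

func main() {
    stopped, err := statusStopped("multinode-313452")
    fmt.Println(stopped, err)
}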

TestMultiNode/serial/RestartMultiNode (135.67s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-313452 --wait=true -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-313452 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (2m15.108155243s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-313452 status --alsologtostderr
E1024 19:28:01.370451   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (135.67s)

TestMultiNode/serial/ValidateNameConflict (52.05s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-313452
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-313452-m02 --driver=kvm2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-313452-m02 --driver=kvm2 : exit status 14 (74.419513ms)

-- stdout --
	* [multinode-313452-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-9104/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9104/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-313452-m02' is duplicated with machine name 'multinode-313452-m02' in profile 'multinode-313452'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-313452-m03 --driver=kvm2 
E1024 19:28:44.562561   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-313452-m03 --driver=kvm2 : (50.678095419s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-313452
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-313452: exit status 80 (252.564646ms)

-- stdout --
	* Adding node m03 to cluster multinode-313452
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-313452-m03 already exists in multinode-313452-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-313452-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (52.05s)

TestPreload (176.11s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-164799 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E1024 19:29:26.868108   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-164799 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m30.149704075s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-164799 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-164799 image pull gcr.io/k8s-minikube/busybox: (1.313157711s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-164799
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-164799: (13.112608919s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-164799 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E1024 19:30:49.912101   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-164799 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m10.273396005s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-164799 image list
helpers_test.go:175: Cleaning up "test-preload-164799" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-164799
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-164799: (1.052089941s)
--- PASS: TestPreload (176.11s)

TestScheduledStopUnix (124.98s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-266293 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-266293 --memory=2048 --driver=kvm2 : (53.209765886s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-266293 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-266293 -n scheduled-stop-266293
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-266293 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-266293 --cancel-scheduled
E1024 19:33:01.369769   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-266293 -n scheduled-stop-266293
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-266293
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-266293 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1024 19:33:44.563126   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-266293
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-266293: exit status 7 (74.392841ms)

-- stdout --
	scheduled-stop-266293
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-266293 -n scheduled-stop-266293
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-266293 -n scheduled-stop-266293: exit status 7 (75.025842ms)

-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-266293" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-266293
--- PASS: TestScheduledStopUnix (124.98s)
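
Note: the scheduled-stop flow exercised above can be replayed by hand with the same flags; a minimal sketch against a hypothetical profile name "demo" (not part of this run):

	minikube stop -p demo --schedule 5m                 # arm a stop five minutes out
	minikube status -p demo --format={{.TimeToStop}}    # inspect the pending schedule
	minikube stop -p demo --cancel-scheduled            # disarm the pending stop
	minikube stop -p demo --schedule 15s                # re-arm; status reports Stopped once it fires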

                                                
                                    
TestSkaffold (139.22s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3762823066 version
skaffold_test.go:63: skaffold version: v2.8.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-807036 --memory=2600 --driver=kvm2 
E1024 19:34:24.415334   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
E1024 19:34:26.868104   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-807036 --memory=2600 --driver=kvm2 : (49.951232588s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3762823066 run --minikube-profile skaffold-807036 --kube-context skaffold-807036 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3762823066 run --minikube-profile skaffold-807036 --kube-context skaffold-807036 --status-check=true --port-forward=false --interactive=false: (1m17.15397877s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-677f84c5bc-qhv2n" [c7cdd4c0-ae7b-424c-a573-59de38916865] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.016806669s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7dcfb49b86-6q2nl" [b4ec895e-b63c-4144-a2f5-f91fc78581aa] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.010793237s
helpers_test.go:175: Cleaning up "skaffold-807036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-807036
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-807036: (1.158447355s)
--- PASS: TestSkaffold (139.22s)

TestRunningBinaryUpgrade (190.55s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.1895094821.exe start -p running-upgrade-844320 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.1895094821.exe start -p running-upgrade-844320 --memory=2200 --vm-driver=kvm2 : (1m52.474882842s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-844320 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-844320 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m16.153576503s)
helpers_test.go:175: Cleaning up "running-upgrade-844320" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-844320
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-844320: (1.484963628s)
--- PASS: TestRunningBinaryUpgrade (190.55s)

TestKubernetesUpgrade (208.56s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-921038 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-921038 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (1m14.17410678s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-921038
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-921038: (3.167081777s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-921038 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-921038 status --format={{.Host}}: exit status 7 (105.191559ms)

-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-921038 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-921038 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 : (49.526849929s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-921038 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-921038 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-921038 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (100.116286ms)

-- stdout --
	* [kubernetes-upgrade-921038] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-9104/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9104/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-921038
	    minikube start -p kubernetes-upgrade-921038 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9210382 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-921038 --kubernetes-version=v1.28.3
	    
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-921038 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 
E1024 19:38:44.560935   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-921038 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 : (1m20.020854037s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-921038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-921038
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-921038: (1.408155449s)
--- PASS: TestKubernetesUpgrade (208.56s)
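
Note: the upgrade path validated above is stop-then-start at a newer --kubernetes-version; a minimal sketch with a hypothetical profile "demo" (versions taken from this run):

	minikube start -p demo --kubernetes-version=v1.16.0 --driver=kvm2
	minikube stop -p demo
	minikube start -p demo --kubernetes-version=v1.28.3 --driver=kvm2   # in-place upgrade of the stopped cluster
	# starting again with v1.16.0 would exit 106 (K8S_DOWNGRADE_UNSUPPORTED), as captured above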

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.52s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.52s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.21s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-130425
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-130425: (3.214390125s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.21s)

TestPause/serial/Start (121.99s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-981066 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
E1024 19:39:26.867909   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-981066 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (2m1.994106174s)
--- PASS: TestPause/serial/Start (121.99s)

TestPause/serial/SecondStartNoReconfiguration (44.68s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-981066 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-981066 --alsologtostderr -v=1 --driver=kvm2 : (44.649944959s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (44.68s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-832174 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-832174 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (79.022103ms)

-- stdout --
	* [NoKubernetes-832174] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17485
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17485-9104/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17485-9104/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
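
Note: the MK_USAGE failure above is the documented flag conflict; a minimal sketch with a hypothetical profile "demo" showing the conflict and the unset step the error message suggests:

	minikube start -p demo --no-kubernetes --kubernetes-version=1.20   # exits 14: the two flags conflict
	minikube config unset kubernetes-version                           # clear a globally pinned version
	minikube start -p demo --no-kubernetes                             # runs the VM without Kubernetes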

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (62.03s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-832174 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-832174 --driver=kvm2 : (1m1.714462087s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-832174 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (62.03s)

TestPause/serial/Pause (0.62s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-981066 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.62s)

TestPause/serial/VerifyStatus (0.26s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-981066 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-981066 --output=json --layout=cluster: exit status 2 (262.462643ms)

-- stdout --
	{"Name":"pause-981066","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-981066","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)
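
Note: the --layout=cluster JSON above is machine-readable; a minimal sketch, assuming jq is available (the test itself does not use it), that extracts the top-level state:

	minikube status -p pause-981066 --output=json --layout=cluster | jq -r '.StatusName'   # prints "Paused" (StatusCode 418); the status command itself exits 2 while paused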

                                                
                                    
TestPause/serial/Unpause (0.56s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-981066 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.56s)

TestPause/serial/PauseAgain (0.72s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-981066 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.72s)

TestPause/serial/DeletePaused (1.13s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-981066 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-981066 --alsologtostderr -v=5: (1.130545882s)
--- PASS: TestPause/serial/DeletePaused (1.13s)

TestPause/serial/VerifyDeletedResources (3.32s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.324224375s)
--- PASS: TestPause/serial/VerifyDeletedResources (3.32s)

TestNoKubernetes/serial/StartWithStopK8s (31.75s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-832174 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-832174 --no-kubernetes --driver=kvm2 : (30.387913687s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-832174 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-832174 status -o json: exit status 2 (256.244782ms)

-- stdout --
	{"Name":"NoKubernetes-832174","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-832174
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-832174: (1.109875254s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (31.75s)

TestNoKubernetes/serial/Start (28.93s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-832174 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-832174 --no-kubernetes --driver=kvm2 : (28.927110406s)
--- PASS: TestNoKubernetes/serial/Start (28.93s)

TestNetworkPlugins/group/auto/Start (126.89s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-014827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E1024 19:43:48.577254   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/skaffold-807036/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-014827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (2m6.89143267s)
--- PASS: TestNetworkPlugins/group/auto/Start (126.89s)

TestNetworkPlugins/group/kindnet/Start (114.9s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-014827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-014827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m54.902599703s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (114.90s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-832174 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-832174 "sudo systemctl is-active --quiet service kubelet": exit status 1 (216.224391ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
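
Note: the "Process exited with status 3" above is systemd's is-active convention for a unit that is not running, relayed through minikube ssh; a minimal sketch, run inside the guest:

	sudo systemctl is-active --quiet service kubelet; echo $?   # 0 when kubelet is active, 3 when inactive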

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.74s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.74s)

TestNoKubernetes/serial/Stop (2.15s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-832174
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-832174: (2.151715979s)
--- PASS: TestNoKubernetes/serial/Stop (2.15s)

TestNoKubernetes/serial/StartNoArgs (79.24s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-832174 --driver=kvm2 
E1024 19:44:26.867595   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-832174 --driver=kvm2 : (1m19.237101892s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (79.24s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-832174 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-832174 "sudo systemctl is-active --quiet service kubelet": exit status 1 (231.017638ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

TestNetworkPlugins/group/calico/Start (107.78s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-014827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-014827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m47.776587238s)
--- PASS: TestNetworkPlugins/group/calico/Start (107.78s)

TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-014827 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

TestNetworkPlugins/group/auto/NetCatPod (12.59s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-014827 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-5qp8t" [36dea74d-c08f-4d62-884c-84bcf2465d81] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1024 19:45:54.078295   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/gvisor-003752/client.crt: no such file or directory
E1024 19:45:54.083587   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/gvisor-003752/client.crt: no such file or directory
E1024 19:45:54.093896   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/gvisor-003752/client.crt: no such file or directory
E1024 19:45:54.114213   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/gvisor-003752/client.crt: no such file or directory
E1024 19:45:54.154527   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/gvisor-003752/client.crt: no such file or directory
E1024 19:45:54.234958   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/gvisor-003752/client.crt: no such file or directory
E1024 19:45:54.395476   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/gvisor-003752/client.crt: no such file or directory
E1024 19:45:54.715786   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/gvisor-003752/client.crt: no such file or directory
E1024 19:45:55.356871   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/gvisor-003752/client.crt: no such file or directory
E1024 19:45:56.637829   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/gvisor-003752/client.crt: no such file or directory
E1024 19:45:59.198982   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/gvisor-003752/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-5qp8t" [36dea74d-c08f-4d62-884c-84bcf2465d81] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.017530148s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.59s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rdvzc" [3b94a1fe-69f9-4120-98d7-824dbb4aeb41] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.022491328s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/custom-flannel/Start (90.01s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-014827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
E1024 19:46:04.320129   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/gvisor-003752/client.crt: no such file or directory
E1024 19:46:04.732427   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/skaffold-807036/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-014827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m30.008878392s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (90.01s)

TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-014827 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-014827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-014827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-014827 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (14.37s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-014827 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xdjgk" [1c1ded9f-3289-4045-a69f-9ff711e17e29] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1024 19:46:14.561059   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/gvisor-003752/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-xdjgk" [1c1ded9f-3289-4045-a69f-9ff711e17e29] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.012926176s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (14.37s)

TestNetworkPlugins/group/kindnet/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-014827 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.29s)

TestNetworkPlugins/group/kindnet/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-014827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-014827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

TestNetworkPlugins/group/false/Start (94.86s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-014827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E1024 19:46:32.417688   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/skaffold-807036/client.crt: no such file or directory
E1024 19:46:35.041398   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/gvisor-003752/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-014827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m34.858107182s)
--- PASS: TestNetworkPlugins/group/false/Start (94.86s)

TestNetworkPlugins/group/enable-default-cni/Start (108.81s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-014827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E1024 19:47:16.002469   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/gvisor-003752/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-014827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m48.805972874s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (108.81s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-z4stl" [3a184d14-cd8f-41ea-86cc-15eff0bea0d4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.029365782s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-014827 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

TestNetworkPlugins/group/calico/NetCatPod (12.43s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-014827 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jb2s5" [265b60f0-d75c-44ce-81b8-1d538a198937] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1024 19:47:29.912844   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-jb2s5" [265b60f0-d75c-44ce-81b8-1d538a198937] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.017858687s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.43s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-014827 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.58s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-014827 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-l5gr5" [8c743179-5737-4d48-b487-51eda8d8b5be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-l5gr5" [8c743179-5737-4d48-b487-51eda8d8b5be] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.013876189s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.58s)

TestNetworkPlugins/group/calico/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-014827 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-014827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-014827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-014827 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-014827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-014827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

TestNetworkPlugins/group/false/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-014827 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.30s)

TestNetworkPlugins/group/false/NetCatPod (14.63s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-014827 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bhgxx" [16cad29a-bd79-454c-8327-77e6474e511b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bhgxx" [16cad29a-bd79-454c-8327-77e6474e511b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 14.013512627s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (14.63s)

TestNetworkPlugins/group/flannel/Start (84.11s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-014827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-014827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m24.111667671s)
--- PASS: TestNetworkPlugins/group/flannel/Start (84.11s)

TestNetworkPlugins/group/bridge/Start (104.55s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-014827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-014827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m44.554212342s)
--- PASS: TestNetworkPlugins/group/bridge/Start (104.55s)

TestNetworkPlugins/group/false/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-014827 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.20s)

TestNetworkPlugins/group/false/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-014827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

TestNetworkPlugins/group/false/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-014827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-014827 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.51s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-014827 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-sbtxh" [6818a14d-1f6f-4cbd-bcd6-47688aacf734] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-sbtxh" [6818a14d-1f6f-4cbd-bcd6-47688aacf734] Running
E1024 19:48:37.922672   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/gvisor-003752/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.124751266s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.51s)
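
NetCatPod force-replaces the netcat deployment from testdata and then waits for an app=netcat pod to reach Running/Ready. The suite polls from Go; an approximate shell equivalent, with kubectl wait standing in for that poll:

    kubectl --context enable-default-cni-014827 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context enable-default-cni-014827 wait --for=condition=Ready \
      pod -l app=netcat --timeout=15m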

TestNetworkPlugins/group/kubenet/Start (114.15s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-014827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-014827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m54.153009978s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (114.15s)
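
The kubenet run is the odd one out: it selects kubelet's legacy kubenet plugin via --network-plugin instead of picking a CNI with --cni, but is otherwise the same invocation (flags copied from the recorded command):

    minikube start -p kubenet-014827 \
      --memory=3072 \
      --network-plugin=kubenet \
      --driver=kvm2 \
      --wait=true --wait-timeout=15m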

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-014827 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-014827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-014827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestStartStop/group/old-k8s-version/serial/FirstStart (179.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-531596 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-531596 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (2m59.335121913s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (179.34s)
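
FirstStart for old-k8s-version pins an old release with --kubernetes-version and spells the libvirt connection out explicitly. Reduced to the distinguishing flags (a sketch; the recorded command also passes --alsologtostderr, --wait=true, --disable-driver-mounts and --keep-context=false):

    minikube start -p old-k8s-version-531596 \
      --memory=2200 \
      --driver=kvm2 \
      --kvm-network=default --kvm-qemu-uri=qemu:///system \
      --kubernetes-version=v1.16.0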

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-rxl6w" [1b6281f2-6b64-4470-833a-7253c0c00039] Running
E1024 19:49:26.867361   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.023276337s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
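
ControllerPod gates the flannel connectivity checks on the flannel controller pod (label app=flannel in the kube-flannel namespace) being up. Approximately, with kubectl wait again standing in for the suite's poll:

    kubectl --context flannel-014827 -n kube-flannel wait \
      --for=condition=Ready pod -l app=flannel --timeout=10m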

TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-014827 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/flannel/NetCatPod (12.43s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-014827 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-tm28p" [a3711db2-d066-41fe-8fa2-732bf8e8ae94] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-tm28p" [a3711db2-d066-41fe-8fa2-732bf8e8ae94] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.03143478s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.43s)

TestNetworkPlugins/group/flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-014827 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

TestNetworkPlugins/group/flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-014827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.24s)

TestNetworkPlugins/group/flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-014827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-014827 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.23s)

TestNetworkPlugins/group/bridge/NetCatPod (13.45s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-014827 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-skdmn" [2833287a-893e-4b07-bbbc-58ecb7c5beb7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-skdmn" [2833287a-893e-4b07-bbbc-58ecb7c5beb7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.014675729s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.45s)

TestStartStop/group/no-preload/serial/FirstStart (104.38s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-301948 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-301948 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3: (1m44.38006346s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (104.38s)
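
The no-preload profile passes --preload=false, which tells minikube to skip its pre-baked image tarball for the requested Kubernetes version and pull everything at start time instead, a likely contributor to the longer start here. The distinguishing flags:

    minikube start -p no-preload-301948 \
      --memory=2200 \
      --preload=false \
      --driver=kvm2 \
      --kubernetes-version=v1.28.3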

TestNetworkPlugins/group/bridge/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-014827 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

TestNetworkPlugins/group/bridge/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-014827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

TestNetworkPlugins/group/bridge/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-014827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-014827 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.45s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.55s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-014827 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-ttn79" [6144dbcc-ca6d-4442-8f4f-d4fd6ecdf2c7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-ttn79" [6144dbcc-ca6d-4442-8f4f-d4fd6ecdf2c7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.013961269s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.55s)

TestStartStop/group/embed-certs/serial/FirstStart (123.78s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-585475 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-585475 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3: (2m3.781400445s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (123.78s)
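
--embed-certs makes minikube inline the client certificate and key into the generated kubeconfig entry instead of referencing file paths under .minikube (sketch of the distinguishing flags):

    minikube start -p embed-certs-585475 \
      --memory=2200 \
      --embed-certs \
      --driver=kvm2 \
      --kubernetes-version=v1.28.3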

TestNetworkPlugins/group/kubenet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-014827 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.20s)

TestNetworkPlugins/group/kubenet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-014827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

TestNetworkPlugins/group/kubenet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-014827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.15s)
E1024 19:56:13.455257   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/enable-default-cni-014827/client.crt: no such file or directory
E1024 19:56:13.908577   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/bridge-014827/client.crt: no such file or directory
E1024 19:56:20.538380   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/auto-014827/client.crt: no such file or directory
E1024 19:56:29.870131   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kindnet-014827/client.crt: no such file or directory
E1024 19:56:47.452023   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kubenet-014827/client.crt: no such file or directory
E1024 19:57:10.092191   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/flannel-014827/client.crt: no such file or directory
E1024 19:57:22.482384   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/calico-014827/client.crt: no such file or directory
E1024 19:57:27.778731   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/skaffold-807036/client.crt: no such file or directory
E1024 19:57:33.380355   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/custom-flannel-014827/client.crt: no such file or directory
E1024 19:57:35.828746   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/bridge-014827/client.crt: no such file or directory

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (119.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-744739 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3
E1024 19:50:57.973705   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/auto-014827/client.crt: no such file or directory
E1024 19:51:02.185716   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kindnet-014827/client.crt: no such file or directory
E1024 19:51:02.190988   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kindnet-014827/client.crt: no such file or directory
E1024 19:51:02.201290   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kindnet-014827/client.crt: no such file or directory
E1024 19:51:02.221605   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kindnet-014827/client.crt: no such file or directory
E1024 19:51:02.261878   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kindnet-014827/client.crt: no such file or directory
E1024 19:51:02.342191   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kindnet-014827/client.crt: no such file or directory
E1024 19:51:02.502705   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kindnet-014827/client.crt: no such file or directory
E1024 19:51:02.823866   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kindnet-014827/client.crt: no such file or directory
E1024 19:51:03.094531   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/auto-014827/client.crt: no such file or directory
E1024 19:51:03.464365   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kindnet-014827/client.crt: no such file or directory
E1024 19:51:04.416289   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
E1024 19:51:04.732448   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/skaffold-807036/client.crt: no such file or directory
E1024 19:51:04.744556   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kindnet-014827/client.crt: no such file or directory
E1024 19:51:07.305441   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kindnet-014827/client.crt: no such file or directory
E1024 19:51:12.426242   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kindnet-014827/client.crt: no such file or directory
E1024 19:51:13.335286   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/auto-014827/client.crt: no such file or directory
E1024 19:51:21.763482   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/gvisor-003752/client.crt: no such file or directory
E1024 19:51:22.667264   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kindnet-014827/client.crt: no such file or directory
E1024 19:51:33.816195   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/auto-014827/client.crt: no such file or directory
E1024 19:51:43.147746   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kindnet-014827/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-744739 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3: (1m59.693785012s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (119.69s)
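
default-k8s-diff-port differs from a stock start only in pinning the API server to a non-default port:

    minikube start -p default-k8s-diff-port-744739 \
      --memory=2200 \
      --apiserver-port=8444 \
      --driver=kvm2 \
      --kubernetes-version=v1.28.3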

TestStartStop/group/no-preload/serial/DeployApp (10.56s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-301948 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e8aa189c-6e28-438b-929a-7d04d265f449] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e8aa189c-6e28-438b-929a-7d04d265f449] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.029530031s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-301948 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.56s)
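
DeployApp is the same three-step smoke test in every StartStop group: create the busybox pod from testdata, wait for the integration-test=busybox pod to run, then exec a trivial command to prove the container is usable (kubectl wait approximating the suite's Go-side poll):

    kubectl --context no-preload-301948 create -f testdata/busybox.yaml
    kubectl --context no-preload-301948 wait --for=condition=Ready \
      pod -l integration-test=busybox --timeout=8m
    kubectl --context no-preload-301948 exec busybox -- /bin/sh -c "ulimit -n"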

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.44s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-301948 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-301948 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.334499897s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-301948 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.44s)
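
EnableAddonWhileActive enables metrics-server on the live cluster with --images/--registries deliberately pointing the addon at echoserver on a fake registry; the follow-up describe of deploy/metrics-server suggests the test verifies that the image substitution landed in the deployment spec, not that metrics actually flow:

    minikube addons enable metrics-server -p no-preload-301948 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    kubectl --context no-preload-301948 describe deploy/metrics-server -n kube-system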

TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-531596 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0e222f55-88b8-47a0-b641-d9f49b396378] Pending
helpers_test.go:344: "busybox" [0e222f55-88b8-47a0-b641-d9f49b396378] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0e222f55-88b8-47a0-b641-d9f49b396378] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.038576849s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-531596 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.51s)

TestStartStop/group/no-preload/serial/Stop (13.16s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-301948 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-301948 --alsologtostderr -v=3: (13.159850396s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.16s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-531596 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-531596 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.86s)

TestStartStop/group/old-k8s-version/serial/Stop (13.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-531596 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-531596 --alsologtostderr -v=3: (13.134303751s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.13s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-301948 -n no-preload-301948
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-301948 -n no-preload-301948: exit status 7 (74.683655ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-301948 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
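
EnableAddonAfterStop leans on minikube's status exit codes: with the VM stopped, status --format={{.Host}} prints Stopped and exits 7, which the test explicitly tolerates ("may be ok") before enabling the dashboard addon on the stopped profile so that it can take effect at the next start:

    minikube status --format={{.Host}} -p no-preload-301948   # exits 7 while stopped
    minikube addons enable dashboard -p no-preload-301948 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4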

TestStartStop/group/no-preload/serial/SecondStart (335.32s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-301948 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3
E1024 19:52:14.776750   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/auto-014827/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-301948 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3: (5m34.943558532s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-301948 -n no-preload-301948
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (335.32s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-531596 -n old-k8s-version-531596
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-531596 -n old-k8s-version-531596: exit status 7 (88.937241ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-531596 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/old-k8s-version/serial/SecondStart (477.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-531596 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E1024 19:52:22.482299   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/calico-014827/client.crt: no such file or directory
E1024 19:52:22.487587   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/calico-014827/client.crt: no such file or directory
E1024 19:52:22.497963   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/calico-014827/client.crt: no such file or directory
E1024 19:52:22.518305   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/calico-014827/client.crt: no such file or directory
E1024 19:52:22.558634   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/calico-014827/client.crt: no such file or directory
E1024 19:52:22.638768   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/calico-014827/client.crt: no such file or directory
E1024 19:52:22.799064   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/calico-014827/client.crt: no such file or directory
E1024 19:52:23.119806   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/calico-014827/client.crt: no such file or directory
E1024 19:52:23.760805   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/calico-014827/client.crt: no such file or directory
E1024 19:52:24.108256   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kindnet-014827/client.crt: no such file or directory
E1024 19:52:25.041470   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/calico-014827/client.crt: no such file or directory
E1024 19:52:27.602483   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/calico-014827/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-531596 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (7m57.053206268s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-531596 -n old-k8s-version-531596
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (477.32s)

TestStartStop/group/embed-certs/serial/DeployApp (10.44s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-585475 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2b95184c-906f-4e5f-9d02-99478a01e520] Pending
helpers_test.go:344: "busybox" [2b95184c-906f-4e5f-9d02-99478a01e520] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1024 19:52:32.723437   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/calico-014827/client.crt: no such file or directory
helpers_test.go:344: "busybox" [2b95184c-906f-4e5f-9d02-99478a01e520] Running
E1024 19:52:33.380099   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/custom-flannel-014827/client.crt: no such file or directory
E1024 19:52:33.385365   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/custom-flannel-014827/client.crt: no such file or directory
E1024 19:52:33.395638   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/custom-flannel-014827/client.crt: no such file or directory
E1024 19:52:33.415947   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/custom-flannel-014827/client.crt: no such file or directory
E1024 19:52:33.456685   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/custom-flannel-014827/client.crt: no such file or directory
E1024 19:52:33.537046   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/custom-flannel-014827/client.crt: no such file or directory
E1024 19:52:33.698111   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/custom-flannel-014827/client.crt: no such file or directory
E1024 19:52:34.018285   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/custom-flannel-014827/client.crt: no such file or directory
E1024 19:52:34.659134   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/custom-flannel-014827/client.crt: no such file or directory
E1024 19:52:35.939676   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/custom-flannel-014827/client.crt: no such file or directory
E1024 19:52:38.500701   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/custom-flannel-014827/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.025518456s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-585475 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.44s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-585475 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-585475 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.084796324s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-585475 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.21s)

TestStartStop/group/embed-certs/serial/Stop (13.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-585475 --alsologtostderr -v=3
E1024 19:52:42.964001   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/calico-014827/client.crt: no such file or directory
E1024 19:52:43.621427   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/custom-flannel-014827/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-585475 --alsologtostderr -v=3: (13.136584058s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.14s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-585475 -n embed-certs-585475
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-585475 -n embed-certs-585475: exit status 7 (93.491924ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-585475 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1024 19:52:53.862154   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/custom-flannel-014827/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (336.29s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-585475 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-585475 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3: (5m35.914693836s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-585475 -n embed-certs-585475
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (336.29s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-744739 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e216a770-60b5-4c5d-bca0-ff3febb9a319] Pending
helpers_test.go:344: "busybox" [e216a770-60b5-4c5d-bca0-ff3febb9a319] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1024 19:52:59.100563   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/false-014827/client.crt: no such file or directory
E1024 19:52:59.105872   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/false-014827/client.crt: no such file or directory
E1024 19:52:59.116254   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/false-014827/client.crt: no such file or directory
E1024 19:52:59.136596   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/false-014827/client.crt: no such file or directory
E1024 19:52:59.176870   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/false-014827/client.crt: no such file or directory
E1024 19:52:59.257230   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/false-014827/client.crt: no such file or directory
E1024 19:52:59.417645   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/false-014827/client.crt: no such file or directory
E1024 19:52:59.738305   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/false-014827/client.crt: no such file or directory
helpers_test.go:344: "busybox" [e216a770-60b5-4c5d-bca0-ff3febb9a319] Running
E1024 19:53:00.378901   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/false-014827/client.crt: no such file or directory
E1024 19:53:01.369787   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
E1024 19:53:01.659109   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/false-014827/client.crt: no such file or directory
E1024 19:53:03.444344   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/calico-014827/client.crt: no such file or directory
E1024 19:53:04.219932   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/false-014827/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.037313511s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-744739 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.60s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-744739 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-744739 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.301332272s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-744739 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.39s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-744739 --alsologtostderr -v=3
E1024 19:53:09.341058   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/false-014827/client.crt: no such file or directory
E1024 19:53:14.342578   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/custom-flannel-014827/client.crt: no such file or directory
E1024 19:53:19.581985   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/false-014827/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-744739 --alsologtostderr -v=3: (13.137548496s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.14s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-744739 -n default-k8s-diff-port-744739
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-744739 -n default-k8s-diff-port-744739: exit status 7 (79.056163ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-744739 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/newest-cni/serial/FirstStart (75.29s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-468999 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3
E1024 19:53:55.303231   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/custom-flannel-014827/client.crt: no such file or directory
E1024 19:54:10.573128   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/enable-default-cni-014827/client.crt: no such file or directory
E1024 19:54:21.023878   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/false-014827/client.crt: no such file or directory
E1024 19:54:26.249919   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/flannel-014827/client.crt: no such file or directory
E1024 19:54:26.255226   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/flannel-014827/client.crt: no such file or directory
E1024 19:54:26.265473   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/flannel-014827/client.crt: no such file or directory
E1024 19:54:26.285809   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/flannel-014827/client.crt: no such file or directory
E1024 19:54:26.326147   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/flannel-014827/client.crt: no such file or directory
E1024 19:54:26.406499   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/flannel-014827/client.crt: no such file or directory
E1024 19:54:26.566900   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/flannel-014827/client.crt: no such file or directory
E1024 19:54:26.867307   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/functional-280129/client.crt: no such file or directory
E1024 19:54:26.887631   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/flannel-014827/client.crt: no such file or directory
E1024 19:54:27.527839   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/flannel-014827/client.crt: no such file or directory
E1024 19:54:28.808099   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/flannel-014827/client.crt: no such file or directory
E1024 19:54:31.368592   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/flannel-014827/client.crt: no such file or directory
E1024 19:54:36.489774   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/flannel-014827/client.crt: no such file or directory
E1024 19:54:46.730387   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/flannel-014827/client.crt: no such file or directory
E1024 19:54:51.534171   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/enable-default-cni-014827/client.crt: no such file or directory
E1024 19:54:51.986129   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/bridge-014827/client.crt: no such file or directory
E1024 19:54:51.991414   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/bridge-014827/client.crt: no such file or directory
E1024 19:54:52.001687   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/bridge-014827/client.crt: no such file or directory
E1024 19:54:52.022000   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/bridge-014827/client.crt: no such file or directory
E1024 19:54:52.062320   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/bridge-014827/client.crt: no such file or directory
E1024 19:54:52.142695   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/bridge-014827/client.crt: no such file or directory
E1024 19:54:52.303216   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/bridge-014827/client.crt: no such file or directory
E1024 19:54:52.623827   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/bridge-014827/client.crt: no such file or directory
E1024 19:54:53.264722   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/bridge-014827/client.crt: no such file or directory
E1024 19:54:54.545637   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/bridge-014827/client.crt: no such file or directory
E1024 19:54:57.105920   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/bridge-014827/client.crt: no such file or directory
E1024 19:55:02.226423   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/bridge-014827/client.crt: no such file or directory
E1024 19:55:06.325432   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/calico-014827/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-468999 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3: (1m15.290636163s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (75.29s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-468999 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1024 19:55:07.210649   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/flannel-014827/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-468999 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.113656598s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/newest-cni/serial/Stop (8.12s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-468999 --alsologtostderr -v=3
E1024 19:55:12.467590   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/bridge-014827/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-468999 --alsologtostderr -v=3: (8.124127231s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.12s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-468999 -n newest-cni-468999
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-468999 -n newest-cni-468999: exit status 7 (86.223135ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-468999 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (50.54s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-468999 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3
E1024 19:55:17.223721   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/custom-flannel-014827/client.crt: no such file or directory
E1024 19:55:25.528837   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kubenet-014827/client.crt: no such file or directory
E1024 19:55:25.534123   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kubenet-014827/client.crt: no such file or directory
E1024 19:55:25.544424   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kubenet-014827/client.crt: no such file or directory
E1024 19:55:25.564776   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kubenet-014827/client.crt: no such file or directory
E1024 19:55:25.605401   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kubenet-014827/client.crt: no such file or directory
E1024 19:55:25.685924   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kubenet-014827/client.crt: no such file or directory
E1024 19:55:25.846402   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kubenet-014827/client.crt: no such file or directory
E1024 19:55:26.167382   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kubenet-014827/client.crt: no such file or directory
E1024 19:55:26.807619   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kubenet-014827/client.crt: no such file or directory
E1024 19:55:28.088069   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kubenet-014827/client.crt: no such file or directory
E1024 19:55:30.648995   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kubenet-014827/client.crt: no such file or directory
E1024 19:55:32.947759   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/bridge-014827/client.crt: no such file or directory
E1024 19:55:35.769713   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kubenet-014827/client.crt: no such file or directory
E1024 19:55:42.944985   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/false-014827/client.crt: no such file or directory
E1024 19:55:46.010455   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kubenet-014827/client.crt: no such file or directory
E1024 19:55:48.171233   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/flannel-014827/client.crt: no such file or directory
E1024 19:55:52.843262   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/auto-014827/client.crt: no such file or directory
E1024 19:55:54.077738   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/gvisor-003752/client.crt: no such file or directory
E1024 19:56:02.185924   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kindnet-014827/client.crt: no such file or directory
E1024 19:56:04.732108   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/skaffold-807036/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-468999 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3: (50.245805923s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-468999 -n newest-cni-468999
E1024 19:56:06.491366   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kubenet-014827/client.crt: no such file or directory
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (50.54s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-468999 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (2.76s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-468999 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-468999 -n newest-cni-468999
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-468999 -n newest-cni-468999: exit status 2 (284.177526ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-468999 -n newest-cni-468999
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-468999 -n newest-cni-468999: exit status 2 (276.075786ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-468999 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-468999 -n newest-cni-468999
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-468999 -n newest-cni-468999
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.76s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (21.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xbjkt" [c07c4622-b3b4-407d-9e68-442d603fb896] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1024 19:57:50.165989   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/calico-014827/client.crt: no such file or directory
E1024 19:57:56.248337   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/default-k8s-diff-port-744739/client.crt: no such file or directory
E1024 19:57:56.253655   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/default-k8s-diff-port-744739/client.crt: no such file or directory
E1024 19:57:56.263979   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/default-k8s-diff-port-744739/client.crt: no such file or directory
E1024 19:57:56.284296   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/default-k8s-diff-port-744739/client.crt: no such file or directory
E1024 19:57:56.324647   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/default-k8s-diff-port-744739/client.crt: no such file or directory
E1024 19:57:56.405016   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/default-k8s-diff-port-744739/client.crt: no such file or directory
E1024 19:57:56.565427   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/default-k8s-diff-port-744739/client.crt: no such file or directory
E1024 19:57:56.886132   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/default-k8s-diff-port-744739/client.crt: no such file or directory
E1024 19:57:57.527120   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/default-k8s-diff-port-744739/client.crt: no such file or directory
E1024 19:57:58.807373   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/default-k8s-diff-port-744739/client.crt: no such file or directory
E1024 19:57:59.100213   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/false-014827/client.crt: no such file or directory
E1024 19:58:01.064977   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/custom-flannel-014827/client.crt: no such file or directory
E1024 19:58:01.367912   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/default-k8s-diff-port-744739/client.crt: no such file or directory
E1024 19:58:01.370083   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/ingress-addon-legacy-855804/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xbjkt" [c07c4622-b3b4-407d-9e68-442d603fb896] Running
E1024 19:58:06.488111   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/default-k8s-diff-port-744739/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 21.023411627s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (21.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xbjkt" [c07c4622-b3b4-407d-9e68-442d603fb896] Running
E1024 19:58:09.373217   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kubenet-014827/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01434465s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-301948 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-301948 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/no-preload/serial/Pause (2.61s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-301948 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-301948 -n no-preload-301948
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-301948 -n no-preload-301948: exit status 2 (267.931035ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-301948 -n no-preload-301948
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-301948 -n no-preload-301948: exit status 2 (260.507784ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-301948 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-301948 -n no-preload-301948
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-301948 -n no-preload-301948
E1024 19:58:16.729180   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/default-k8s-diff-port-744739/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.61s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-x29g2" [32b5f481-76f2-448e-b7ed-e9e420475dd8] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1024 19:58:37.210261   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/default-k8s-diff-port-744739/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-x29g2" [32b5f481-76f2-448e-b7ed-e9e420475dd8] Running
E1024 19:58:44.561594   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/addons-903896/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.021858021s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-x29g2" [32b5f481-76f2-448e-b7ed-e9e420475dd8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015789103s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-585475 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-585475 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/embed-certs/serial/Pause (2.71s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-585475 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-585475 -n embed-certs-585475
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-585475 -n embed-certs-585475: exit status 2 (280.757191ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-585475 -n embed-certs-585475
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-585475 -n embed-certs-585475: exit status 2 (287.70386ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-585475 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-585475 -n embed-certs-585475
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-585475 -n embed-certs-585475
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.71s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-vv7dw" [7f908da8-db9e-4f8a-a3b3-843575ba8806] Running
E1024 20:00:19.668914   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/bridge-014827/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.017464266s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-vv7dw" [7f908da8-db9e-4f8a-a3b3-843575ba8806] Running
E1024 20:00:25.528935   16443 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17485-9104/.minikube/profiles/kubenet-014827/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009799353s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-531596 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/Pause (2.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-531596 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-531596 -n old-k8s-version-531596
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-531596 -n old-k8s-version-531596: exit status 2 (240.447601ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-531596 -n old-k8s-version-531596
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-531596 -n old-k8s-version-531596: exit status 2 (242.415059ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-531596 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-531596 -n old-k8s-version-531596
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-531596 -n old-k8s-version-531596
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.43s)

Test skip (31/321)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

TestDownloadOnly/v1.28.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

TestDownloadOnly/v1.28.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (4.24s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-014827 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-014827

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-014827

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-014827

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-014827

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-014827

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-014827

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-014827

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-014827

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-014827

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-014827

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-014827

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-014827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-014827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-014827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-014827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-014827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-014827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-014827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-014827" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-014827

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-014827

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-014827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-014827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-014827

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-014827

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-014827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-014827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-014827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-014827" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-014827" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

>>> host: kubelet daemon config:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

>>> k8s: kubelet logs:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-014827

>>> host: docker daemon status:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

>>> host: docker daemon config:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

>>> host: docker system info:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

>>> host: cri-docker daemon status:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

>>> host: cri-docker daemon config:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

>>> host: cri-dockerd version:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

>>> host: containerd daemon status:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

>>> host: containerd daemon config:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

>>> host: containerd config dump:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

>>> host: crio daemon status:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

>>> host: crio daemon config:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

>>> host: /etc/crio:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

>>> host: crio config:
* Profile "cilium-014827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-014827"

----------------------- debugLogs end: cilium-014827 [took: 4.041569507s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-014827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-014827
--- SKIP: TestNetworkPlugins/group/cilium (4.24s)

TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-874534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-874534
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)
