Test Report: KVM_Linux 17488

                    
292152b7ba2fff47063f7712cda18987a57d80fb:2023-10-25:31605

Failed tests (2/321)

Order  Failed test                                                          Duration (s)
250    TestStoppedBinaryUpgrade/Upgrade                                     1037.86
384    TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages    1.97
TestStoppedBinaryUpgrade/Upgrade (1037.86s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.673550539.exe start -p stopped-upgrade-634233 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /tmp/minikube-v1.6.2.673550539.exe start -p stopped-upgrade-634233 --memory=2200 --vm-driver=kvm2 : exit status 70 (4.791563606s)

-- stdout --
	* [stopped-upgrade-634233] minikube v1.6.2 on Ubuntu 20.04
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-80960/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig1356966743
	* Selecting 'kvm2' driver from user configuration (alternates: [none])
	* Downloading VM boot image ...

-- /stdout --
** stderr ** 
	
	! 'kvm2' driver reported an issue: /usr/bin/virsh domcapabilities --virttype kvm failed:
	error: failed to get emulator capabilities
	error: invalid argument: KVM is not supported by '/usr/bin/qemu-system-x86_64' on this host
	* Suggestion: Follow your Linux distribution instructions for configuring KVM
	* Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
	
	    > minikube-v1.6.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s
	    > minikube-v1.6.0.iso: 150.93 MiB / 150.93 MiB [-] 100.00% 40.90 MiB p/s 4s
	* 
	X Failed to cache ISO: https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso: Failed to open file for checksum: open /home/jenkins/minikube-integration/17488-80960/.minikube/cache/iso/minikube-v1.6.0.iso.download: no such file or directory
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
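
The first start attempt above exits with status 70. The kvm2 driver warning is non-fatal; the fatal step is ISO caching, where minikube downloads the image to a temporary .download file and then reopens it to verify its checksum, but the file is already gone ("no such file or directory") — plausibly cleaned up by one of the concurrent minikube instances that appear later in this log. Below is a minimal Go sketch of that verify-then-commit pattern (an illustration only, not minikube's actual code; the paths and expected digest are hypothetical):

    // Sketch of verify-then-commit caching: checksum the temporary download,
    // and only rename it into the cache if the digest matches. If another
    // process removes tmpPath first, os.Open fails with "no such file or
    // directory", as in the report above.
    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"os"
    )

    func verifyAndCommit(tmpPath, finalPath, wantHex string) error {
    	f, err := os.Open(tmpPath)
    	if err != nil {
    		return fmt.Errorf("failed to open file for checksum: %w", err)
    	}
    	defer f.Close()

    	h := sha256.New()
    	if _, err := io.Copy(h, f); err != nil {
    		return err
    	}
    	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
    		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
    	}
    	return os.Rename(tmpPath, finalPath)
    }

    func main() {
    	// Hypothetical invocation mirroring the paths in the log.
    	err := verifyAndCommit(
    		"/tmp/iso/minikube-v1.6.0.iso.download",
    		"/tmp/iso/minikube-v1.6.0.iso",
    		"0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef",
    	)
    	fmt.Println(err)
    }

The test then retries the same start command, which succeeds on the second attempt before the cluster is stopped and upgraded:
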
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.673550539.exe start -p stopped-upgrade-634233 --memory=2200 --vm-driver=kvm2 
E1025 21:54:46.751093   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.673550539.exe start -p stopped-upgrade-634233 --memory=2200 --vm-driver=kvm2 : (1m55.48473624s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.673550539.exe -p stopped-upgrade-634233 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.673550539.exe -p stopped-upgrade-634233 stop: (13.084391404s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-634233 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-634233 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : exit status 109 (15m3.493775591s)

-- stdout --
	* [stopped-upgrade-634233] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17488-80960/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-80960/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-634233 in cluster stopped-upgrade-634233
	* Restarting existing kvm2 VM for "stopped-upgrade-634233" ...
	* Preparing Kubernetes v1.17.0 on Docker 19.03.5 ...
	* Another minikube instance is downloading dependencies... 
	* Another minikube instance is downloading dependencies... 
	* Another minikube instance is downloading dependencies... 
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	X Problems detected in kubelet:
	  Oct 25 22:10:32 stopped-upgrade-634233 kubelet[1836]: E1025 22:10:32.398652    1836 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	  Oct 25 22:10:45 stopped-upgrade-634233 kubelet[3111]: E1025 22:10:45.309286    3111 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	  Oct 25 22:10:46 stopped-upgrade-634233 kubelet[3111]: E1025 22:10:46.288468    3111 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	
	

-- /stdout --
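
The "back-off 10s" / "back-off 20s" figures in the kubelet messages above come from the kubelet's restart back-off for crashing containers: each consecutive crash roughly doubles the delay before the next restart attempt, up to a cap. A minimal illustrative sketch follows (the 10s base and 5m cap are stated assumptions, not quoted from kubelet source):

    // Doubling restart back-off, as reflected in the CrashLoopBackOff
    // messages above. Prints the delay applied after each crash.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const (
    		initialDelay = 10 * time.Second // assumed base delay
    		maxDelay     = 5 * time.Minute  // assumed cap
    	)
    	delay := initialDelay
    	for crash := 1; crash <= 6; crash++ {
    		fmt.Printf("crash %d: back-off %s restarting failed container\n", crash, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }
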
** stderr ** 
	I1025 21:55:47.017822  112102 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:55:47.018113  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:55:47.018124  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 21:55:47.018129  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:55:47.018335  112102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-80960/.minikube/bin
	I1025 21:55:47.018861  112102 out.go:303] Setting JSON to false
	I1025 21:55:47.019852  112102 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":13082,"bootTime":1698257865,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 21:55:47.019912  112102 start.go:138] virtualization: kvm guest
	I1025 21:55:47.022232  112102 out.go:177] * [stopped-upgrade-634233] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1025 21:55:47.024147  112102 notify.go:220] Checking for updates...
	I1025 21:55:47.024210  112102 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 21:55:47.025702  112102 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:55:47.027214  112102 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17488-80960/kubeconfig
	I1025 21:55:47.028641  112102 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-80960/.minikube
	I1025 21:55:47.030038  112102 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 21:55:47.031589  112102 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 21:55:47.033453  112102 config.go:182] Loaded profile config "stopped-upgrade-634233": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1025 21:55:47.033468  112102 start_flags.go:689] config upgrade: Driver=kvm2
	I1025 21:55:47.033476  112102 start_flags.go:701] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883
	I1025 21:55:47.033537  112102 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/stopped-upgrade-634233/config.json ...
	I1025 21:55:47.034127  112102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1025 21:55:47.034184  112102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:55:47.048676  112102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46153
	I1025 21:55:47.049134  112102 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:55:47.049734  112102 main.go:141] libmachine: Using API Version  1
	I1025 21:55:47.049760  112102 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:55:47.050086  112102 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:55:47.050255  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .DriverName
	I1025 21:55:47.052402  112102 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1025 21:55:47.054050  112102 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 21:55:47.054342  112102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1025 21:55:47.054389  112102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:55:47.070006  112102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34455
	I1025 21:55:47.070441  112102 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:55:47.070906  112102 main.go:141] libmachine: Using API Version  1
	I1025 21:55:47.070933  112102 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:55:47.071265  112102 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:55:47.071478  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .DriverName
	I1025 21:55:47.108024  112102 out.go:177] * Using the kvm2 driver based on existing profile
	I1025 21:55:47.109321  112102 start.go:298] selected driver: kvm2
	I1025 21:55:47.109339  112102 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-634233 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.236 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1025 21:55:47.109471  112102 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:55:47.110219  112102 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:55:47.110313  112102 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17488-80960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1025 21:55:47.125036  112102 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1025 21:55:47.125395  112102 cni.go:84] Creating CNI manager for ""
	I1025 21:55:47.125424  112102 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 21:55:47.125439  112102 start_flags.go:323] config:
	{Name:stopped-upgrade-634233 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.236 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1025 21:55:47.125631  112102 iso.go:125] acquiring lock: {Name:mk6659ecb6ed7b24fa2ae65bc0b8e3b5916d75e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:55:47.127455  112102 out.go:177] * Starting control plane node stopped-upgrade-634233 in cluster stopped-upgrade-634233
	I1025 21:55:47.128666  112102 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	W1025 21:55:47.730774  112102 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I1025 21:55:47.730928  112102 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/stopped-upgrade-634233/config.json ...
	I1025 21:55:47.731029  112102 cache.go:107] acquiring lock: {Name:mk66722b0c7d0802779bb91cd665f21f019e6dde Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:55:47.731031  112102 cache.go:107] acquiring lock: {Name:mk042f89c1e87d68189138597e07a3dbc4e16f22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:55:47.731109  112102 cache.go:107] acquiring lock: {Name:mk7732063a37da305fc0bd9f5b667d3412caf0c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:55:47.731152  112102 cache.go:115] /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 21:55:47.731158  112102 cache.go:115] /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1025 21:55:47.731166  112102 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 156.061µs
	I1025 21:55:47.731178  112102 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 21:55:47.731176  112102 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 73.608µs
	I1025 21:55:47.731185  112102 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1025 21:55:47.731148  112102 cache.go:107] acquiring lock: {Name:mk1871a19eccad3e50c14cd19f1f8b2380957508 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:55:47.731197  112102 cache.go:107] acquiring lock: {Name:mk4e47796820047372558a160f52936b408e80ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:55:47.731221  112102 start.go:365] acquiring machines lock for stopped-upgrade-634233: {Name:mk84b47429efad52c9c4eeca04f7cb6277d41bb4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 21:55:47.731234  112102 cache.go:115] /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1025 21:55:47.731239  112102 cache.go:115] /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1025 21:55:47.731228  112102 cache.go:107] acquiring lock: {Name:mk1fde4bf99dfe12b10193dcfb3fc9e08e8faf0f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:55:47.731243  112102 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 229.277µs
	I1025 21:55:47.731246  112102 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 51.05µs
	I1025 21:55:47.731253  112102 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1025 21:55:47.731255  112102 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1025 21:55:47.731148  112102 cache.go:107] acquiring lock: {Name:mk5461a3bb7521360d94bbac10f9d5fe42facfe0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:55:47.731198  112102 cache.go:107] acquiring lock: {Name:mk4f4fd18a02ec82da75a8b516602f12eb4877dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:55:47.731299  112102 cache.go:115] /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1025 21:55:47.731313  112102 cache.go:115] /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1025 21:55:47.731333  112102 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 220.958µs
	I1025 21:55:47.731342  112102 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 224.727µs
	I1025 21:55:47.731349  112102 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1025 21:55:47.731350  112102 cache.go:115] /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1025 21:55:47.731353  112102 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1025 21:55:47.731364  112102 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 136.053µs
	I1025 21:55:47.731386  112102 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1025 21:55:47.731457  112102 cache.go:115] /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1025 21:55:47.731471  112102 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 274.751µs
	I1025 21:55:47.731484  112102 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1025 21:55:47.731502  112102 cache.go:87] Successfully saved all images to host disk.
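
The cache.go lines above follow one pattern per image: acquire a lock, check whether the cached tar already exists, and record a microsecond-scale cache hit when it does. A simplified Go sketch of that check-under-lock flow (in-process mutexes stand in for minikube's file-based locks, and the actual image download is omitted):

    // Per-image cache check: lock, stat the target tar, and skip the save on
    // a cache hit. A real implementation would pull and write the image on a
    // miss; this sketch only reports it.
    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    	"sync"
    )

    var cacheLocks sync.Map // image name -> *sync.Mutex

    func saveToTar(image, cacheDir string) error {
    	mu, _ := cacheLocks.LoadOrStore(image, &sync.Mutex{})
    	mu.(*sync.Mutex).Lock()
    	defer mu.(*sync.Mutex).Unlock()

    	dst := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
    	if _, err := os.Stat(dst); err == nil {
    		fmt.Printf("cache image %q -> %q: already exists\n", image, dst)
    		return nil // cache hit
    	}
    	return fmt.Errorf("cache miss for %s: download not implemented in this sketch", image)
    }

    func main() {
    	_ = saveToTar("registry.k8s.io/pause:3.1", os.TempDir())
    }
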
	I1025 21:56:23.953120  112102 start.go:369] acquired machines lock for "stopped-upgrade-634233" in 36.221869178s
	I1025 21:56:23.953188  112102 start.go:96] Skipping create...Using existing machine configuration
	I1025 21:56:23.953201  112102 fix.go:54] fixHost starting: minikube
	I1025 21:56:23.953615  112102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1025 21:56:23.953659  112102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:56:23.974701  112102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34477
	I1025 21:56:23.975222  112102 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:56:23.975839  112102 main.go:141] libmachine: Using API Version  1
	I1025 21:56:23.975884  112102 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:56:23.976409  112102 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:56:23.976637  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .DriverName
	I1025 21:56:23.976891  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetState
	I1025 21:56:23.978772  112102 fix.go:102] recreateIfNeeded on stopped-upgrade-634233: state=Stopped err=<nil>
	I1025 21:56:23.978824  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .DriverName
	W1025 21:56:23.978999  112102 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 21:56:23.980834  112102 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-634233" ...
	I1025 21:56:23.982284  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .Start
	I1025 21:56:23.982687  112102 main.go:141] libmachine: (stopped-upgrade-634233) Ensuring networks are active...
	I1025 21:56:23.983450  112102 main.go:141] libmachine: (stopped-upgrade-634233) Ensuring network default is active
	I1025 21:56:23.983869  112102 main.go:141] libmachine: (stopped-upgrade-634233) Ensuring network minikube-net is active
	I1025 21:56:23.984416  112102 main.go:141] libmachine: (stopped-upgrade-634233) Getting domain xml...
	I1025 21:56:23.985278  112102 main.go:141] libmachine: (stopped-upgrade-634233) Creating domain...
	I1025 21:56:25.395743  112102 main.go:141] libmachine: (stopped-upgrade-634233) Waiting to get IP...
	I1025 21:56:25.397051  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:25.397629  112102 main.go:141] libmachine: (stopped-upgrade-634233) Found IP for machine: 192.168.50.236
	I1025 21:56:25.397667  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has current primary IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:25.397678  112102 main.go:141] libmachine: (stopped-upgrade-634233) Reserving static IP address...
	I1025 21:56:25.398326  112102 main.go:141] libmachine: (stopped-upgrade-634233) Reserved static IP address: 192.168.50.236
	I1025 21:56:25.398401  112102 main.go:141] libmachine: (stopped-upgrade-634233) Waiting for SSH to be available...
	I1025 21:56:25.398440  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "stopped-upgrade-634233", mac: "52:54:00:26:b5:da", ip: "192.168.50.236"} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:54:12 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
	I1025 21:56:25.398479  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-634233", mac: "52:54:00:26:b5:da", ip: "192.168.50.236"}
	I1025 21:56:25.398498  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | Getting to WaitForSSH function...
	I1025 21:56:25.401554  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:25.402046  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:54:12 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
	I1025 21:56:25.402076  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:25.402220  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | Using SSH client type: external
	I1025 21:56:25.402601  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | Using SSH private key: /home/jenkins/minikube-integration/17488-80960/.minikube/machines/stopped-upgrade-634233/id_rsa (-rw-------)
	I1025 21:56:25.402657  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.236 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17488-80960/.minikube/machines/stopped-upgrade-634233/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1025 21:56:25.402691  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | About to run SSH command:
	I1025 21:56:25.402705  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | exit 0
	I1025 21:56:42.549288  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | SSH cmd err, output: exit status 255: 
	I1025 21:56:42.549326  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1025 21:56:42.549340  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | command : exit 0
	I1025 21:56:42.549353  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | err     : exit status 255
	I1025 21:56:42.549367  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | output  : 
	I1025 21:56:45.550307  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | Getting to WaitForSSH function...
	I1025 21:56:45.553110  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:45.553477  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:54:12 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
	I1025 21:56:45.553527  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:45.553569  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | Using SSH client type: external
	I1025 21:56:45.553620  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | Using SSH private key: /home/jenkins/minikube-integration/17488-80960/.minikube/machines/stopped-upgrade-634233/id_rsa (-rw-------)
	I1025 21:56:45.553652  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.236 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17488-80960/.minikube/machines/stopped-upgrade-634233/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1025 21:56:45.553671  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | About to run SSH command:
	I1025 21:56:45.553685  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | exit 0
	I1025 21:56:51.831645  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | SSH cmd err, output: <nil>: 
	I1025 21:56:51.832003  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetConfigRaw
	I1025 21:56:51.832627  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetIP
	I1025 21:56:51.834739  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:51.835181  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
	I1025 21:56:51.835217  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:51.835468  112102 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/stopped-upgrade-634233/config.json ...
	I1025 21:56:51.835686  112102 machine.go:88] provisioning docker machine ...
	I1025 21:56:51.835711  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .DriverName
	I1025 21:56:51.835926  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetMachineName
	I1025 21:56:51.836099  112102 buildroot.go:166] provisioning hostname "stopped-upgrade-634233"
	I1025 21:56:51.836123  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetMachineName
	I1025 21:56:51.836269  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHHostname
	I1025 21:56:51.838355  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:51.838765  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
	I1025 21:56:51.838808  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:51.838985  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHPort
	I1025 21:56:51.839186  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
	I1025 21:56:51.839374  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
	I1025 21:56:51.839577  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHUsername
	I1025 21:56:51.839816  112102 main.go:141] libmachine: Using SSH client type: native
	I1025 21:56:51.840342  112102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I1025 21:56:51.840360  112102 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-634233 && echo "stopped-upgrade-634233" | sudo tee /etc/hostname
	I1025 21:56:51.974781  112102 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-634233
	
	I1025 21:56:51.974814  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHHostname
	I1025 21:56:51.977831  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:51.978205  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
	I1025 21:56:51.978237  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:51.978418  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHPort
	I1025 21:56:51.978632  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
	I1025 21:56:51.978779  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
	I1025 21:56:51.978992  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHUsername
	I1025 21:56:51.979156  112102 main.go:141] libmachine: Using SSH client type: native
	I1025 21:56:51.979469  112102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I1025 21:56:51.979489  112102 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-634233' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-634233/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-634233' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 21:56:52.113234  112102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 21:56:52.113266  112102 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17488-80960/.minikube CaCertPath:/home/jenkins/minikube-integration/17488-80960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17488-80960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17488-80960/.minikube}
	I1025 21:56:52.113291  112102 buildroot.go:174] setting up certificates
	I1025 21:56:52.113311  112102 provision.go:83] configureAuth start
	I1025 21:56:52.113349  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetMachineName
	I1025 21:56:52.113594  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetIP
	I1025 21:56:52.116350  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:52.116826  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
	I1025 21:56:52.116858  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:52.117056  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHHostname
	I1025 21:56:52.119304  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:52.119727  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
	I1025 21:56:52.119770  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:52.119895  112102 provision.go:138] copyHostCerts
	I1025 21:56:52.119954  112102 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-80960/.minikube/cert.pem, removing ...
	I1025 21:56:52.119981  112102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-80960/.minikube/cert.pem
	I1025 21:56:52.120075  112102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-80960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17488-80960/.minikube/cert.pem (1123 bytes)
	I1025 21:56:52.120193  112102 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-80960/.minikube/key.pem, removing ...
	I1025 21:56:52.120204  112102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-80960/.minikube/key.pem
	I1025 21:56:52.120253  112102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-80960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17488-80960/.minikube/key.pem (1679 bytes)
	I1025 21:56:52.120308  112102 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-80960/.minikube/ca.pem, removing ...
	I1025 21:56:52.120316  112102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-80960/.minikube/ca.pem
	I1025 21:56:52.120339  112102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-80960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17488-80960/.minikube/ca.pem (1082 bytes)
	I1025 21:56:52.120380  112102 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17488-80960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17488-80960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17488-80960/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-634233 san=[192.168.50.236 192.168.50.236 localhost 127.0.0.1 minikube stopped-upgrade-634233]
	I1025 21:56:52.193166  112102 provision.go:172] copyRemoteCerts
	I1025 21:56:52.193236  112102 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 21:56:52.193262  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHHostname
	I1025 21:56:52.195941  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:52.196210  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
	I1025 21:56:52.196263  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:52.196397  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHPort
	I1025 21:56:52.196581  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
	I1025 21:56:52.196757  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHUsername
	I1025 21:56:52.196908  112102 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/stopped-upgrade-634233/id_rsa Username:docker}
	I1025 21:56:52.286373  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 21:56:52.301342  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1025 21:56:52.315903  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 21:56:52.330603  112102 provision.go:86] duration metric: configureAuth took 217.274705ms
	I1025 21:56:52.330639  112102 buildroot.go:189] setting minikube options for container-runtime
	I1025 21:56:52.330823  112102 config.go:182] Loaded profile config "stopped-upgrade-634233": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1025 21:56:52.330853  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .DriverName
	I1025 21:56:52.331175  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHHostname
	I1025 21:56:52.334359  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:52.334777  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
	I1025 21:56:52.334817  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:52.335002  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHPort
	I1025 21:56:52.335230  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
	I1025 21:56:52.335424  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
	I1025 21:56:52.335606  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHUsername
	I1025 21:56:52.335799  112102 main.go:141] libmachine: Using SSH client type: native
	I1025 21:56:52.336120  112102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I1025 21:56:52.336131  112102 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 21:56:52.466037  112102 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1025 21:56:52.466067  112102 buildroot.go:70] root file system type: tmpfs
	I1025 21:56:52.466195  112102 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 21:56:52.466226  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHHostname
	I1025 21:56:52.469064  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:52.469445  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
	I1025 21:56:52.469481  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:52.469657  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHPort
	I1025 21:56:52.469861  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
	I1025 21:56:52.470050  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
	I1025 21:56:52.470197  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHUsername
	I1025 21:56:52.470339  112102 main.go:141] libmachine: Using SSH client type: native
	I1025 21:56:52.470717  112102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I1025 21:56:52.470819  112102 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 21:56:52.607454  112102 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 21:56:52.607489  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHHostname
	I1025 21:56:52.610280  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:52.610690  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
	I1025 21:56:52.610728  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:52.610871  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHPort
	I1025 21:56:52.611081  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
	I1025 21:56:52.611238  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
	I1025 21:56:52.611482  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHUsername
	I1025 21:56:52.611725  112102 main.go:141] libmachine: Using SSH client type: native
	I1025 21:56:52.612045  112102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I1025 21:56:52.612063  112102 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 21:56:53.439675  112102 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1025 21:56:53.439706  112102 machine.go:91] provisioned docker machine in 1.604003976s
	I1025 21:56:53.439717  112102 start.go:300] post-start starting for "stopped-upgrade-634233" (driver="kvm2")
	I1025 21:56:53.439740  112102 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 21:56:53.439763  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .DriverName
	I1025 21:56:53.440106  112102 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 21:56:53.440142  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHHostname
	I1025 21:56:53.442783  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:53.443143  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
	I1025 21:56:53.443187  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:53.443387  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHPort
	I1025 21:56:53.443609  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
	I1025 21:56:53.443809  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHUsername
	I1025 21:56:53.443954  112102 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/stopped-upgrade-634233/id_rsa Username:docker}
	I1025 21:56:53.538118  112102 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 21:56:53.543631  112102 info.go:137] Remote host: Buildroot 2019.02.7
	I1025 21:56:53.543668  112102 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-80960/.minikube/addons for local assets ...
	I1025 21:56:53.543767  112102 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-80960/.minikube/files for local assets ...
	I1025 21:56:53.543924  112102 filesync.go:149] local asset: /home/jenkins/minikube-integration/17488-80960/.minikube/files/etc/ssl/certs/882442.pem -> 882442.pem in /etc/ssl/certs
	I1025 21:56:53.544111  112102 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 21:56:53.550794  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/files/etc/ssl/certs/882442.pem --> /etc/ssl/certs/882442.pem (1708 bytes)
	I1025 21:56:53.565050  112102 start.go:303] post-start completed in 125.318366ms
	I1025 21:56:53.565073  112102 fix.go:56] fixHost completed within 29.611873312s
	I1025 21:56:53.565100  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHHostname
	I1025 21:56:53.567970  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:53.568408  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
	I1025 21:56:53.568444  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:53.568657  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHPort
	I1025 21:56:53.568893  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
	I1025 21:56:53.569097  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
	I1025 21:56:53.569240  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHUsername
	I1025 21:56:53.569449  112102 main.go:141] libmachine: Using SSH client type: native
	I1025 21:56:53.569940  112102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.50.236 22 <nil> <nil>}
	I1025 21:56:53.569957  112102 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 21:56:53.701183  112102 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698271013.644529385
	
	I1025 21:56:53.701210  112102 fix.go:206] guest clock: 1698271013.644529385
	I1025 21:56:53.701220  112102 fix.go:219] Guest: 2023-10-25 21:56:53.644529385 +0000 UTC Remote: 2023-10-25 21:56:53.565077837 +0000 UTC m=+66.606241784 (delta=79.451548ms)
	I1025 21:56:53.701267  112102 fix.go:190] guest clock delta is within tolerance: 79.451548ms
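
The clock check works by running `date +%s.%N` in the guest and comparing it against the host's wall clock at the moment the command returned: 21:56:53.644529385 minus 21:56:53.565077837 gives the 79.451548ms delta logged above. A rough Go sketch of the parse-and-compare step; the 2s tolerance below is an assumed value for illustration, and the fractional part is assumed to be the full 9-digit `%N` nanosecond field:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns `date +%s.%N` output into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            // %N always prints 9 digits, so this parses directly as nanoseconds.
            nsec, err = strconv.ParseInt(parts[1], 10, 64)
            if err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, _ := parseGuestClock("1698271013.644529385") // value from the log
        delta := time.Now().Sub(guest)
        if delta < 0 {
            delta = -delta
        }
        fmt.Printf("delta=%v within 2s tolerance: %v\n", delta, delta < 2*time.Second)
    }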
	I1025 21:56:53.701275  112102 start.go:83] releasing machines lock for "stopped-upgrade-634233", held for 29.74812345s
	I1025 21:56:53.701313  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .DriverName
	I1025 21:56:53.701617  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetIP
	I1025 21:56:53.704359  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:53.704791  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
	I1025 21:56:53.704820  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:53.705063  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .DriverName
	I1025 21:56:53.705702  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .DriverName
	I1025 21:56:53.705916  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .DriverName
	I1025 21:56:53.706029  112102 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 21:56:53.706073  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHHostname
	I1025 21:56:53.706131  112102 ssh_runner.go:195] Run: cat /version.json
	I1025 21:56:53.706172  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHHostname
	I1025 21:56:53.708989  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:53.709397  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:53.709444  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
	I1025 21:56:53.709469  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:53.709533  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHPort
	I1025 21:56:53.709670  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
	I1025 21:56:53.709833  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHUsername
	I1025 21:56:53.709854  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
	I1025 21:56:53.709881  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:53.709975  112102 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/stopped-upgrade-634233/id_rsa Username:docker}
	I1025 21:56:53.710091  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHPort
	I1025 21:56:53.710231  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHKeyPath
	I1025 21:56:53.710353  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetSSHUsername
	I1025 21:56:53.710504  112102 sshutil.go:53] new ssh client: &{IP:192.168.50.236 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/stopped-upgrade-634233/id_rsa Username:docker}
	W1025 21:56:53.827258  112102 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1025 21:56:53.827343  112102 ssh_runner.go:195] Run: systemctl --version
	I1025 21:56:53.832682  112102 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 21:56:53.838192  112102 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 21:56:53.838285  112102 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1025 21:56:53.845857  112102 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1025 21:56:53.852492  112102 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I1025 21:56:53.852515  112102 start.go:472] detecting cgroup driver to use...
	I1025 21:56:53.852659  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 21:56:53.866108  112102 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I1025 21:56:53.872964  112102 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 21:56:53.880686  112102 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 21:56:53.880748  112102 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 21:56:53.889458  112102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 21:56:53.896158  112102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 21:56:53.904005  112102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 21:56:53.910964  112102 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 21:56:53.920299  112102 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 21:56:53.927421  112102 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 21:56:53.934472  112102 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 21:56:53.940872  112102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 21:56:54.018241  112102 ssh_runner.go:195] Run: sudo systemctl restart containerd
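
The block of `sed -r` runs above rewrites /etc/containerd/config.toml so containerd matches the chosen "cgroupfs" driver. A hypothetical Go equivalent of one of those edits, the SystemdCgroup toggle, using regexp instead of sed (run as root against a real config.toml):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        path := "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        // Mirrors: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }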
	I1025 21:56:54.034541  112102 start.go:472] detecting cgroup driver to use...
	I1025 21:56:54.034622  112102 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 21:56:54.051519  112102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 21:56:54.062873  112102 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 21:56:54.077877  112102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 21:56:54.087857  112102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 21:56:54.100985  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 21:56:54.114086  112102 ssh_runner.go:195] Run: which cri-dockerd
	I1025 21:56:54.118655  112102 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 21:56:54.125016  112102 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 21:56:54.136461  112102 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 21:56:54.229546  112102 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 21:56:54.322360  112102 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 21:56:54.322525  112102 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 21:56:54.334578  112102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 21:56:54.425295  112102 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 21:56:55.861359  112102 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.435994986s)
	I1025 21:56:55.861436  112102 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 21:56:55.911261  112102 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 21:56:55.968056  112102 out.go:204] * Preparing Kubernetes v1.17.0 on Docker 19.03.5 ...
	I1025 21:56:55.968102  112102 main.go:141] libmachine: (stopped-upgrade-634233) Calling .GetIP
	I1025 21:56:55.971517  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:55.972202  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:26:b5:da", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-25 22:56:50 +0000 UTC Type:0 Mac:52:54:00:26:b5:da Iaid: IPaddr:192.168.50.236 Prefix:24 Hostname:stopped-upgrade-634233 Clientid:01:52:54:00:26:b5:da}
	I1025 21:56:55.972299  112102 main.go:141] libmachine: (stopped-upgrade-634233) DBG | domain stopped-upgrade-634233 has defined IP address 192.168.50.236 and MAC address 52:54:00:26:b5:da in network minikube-net
	I1025 21:56:55.972477  112102 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1025 21:56:55.976368  112102 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
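
The /etc/hosts update above is made idempotent by filtering out any stale line for the host name before appending the fresh mapping. A local Go sketch of that filter-then-append pattern (ensureHostsEntry is an illustrative helper name; minikube performs this remotely via /bin/bash over SSH):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any existing "<name>" line, appends the fresh
    // "ip\tname" mapping, and writes the result back.
    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }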
	I1025 21:56:55.985456  112102 localpath.go:92] copying /home/jenkins/minikube-integration/17488-80960/.minikube/client.crt -> /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/stopped-upgrade-634233/client.crt
	I1025 21:56:55.985636  112102 localpath.go:117] copying /home/jenkins/minikube-integration/17488-80960/.minikube/client.key -> /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/stopped-upgrade-634233/client.key
	I1025 21:56:55.985781  112102 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I1025 21:56:55.985835  112102 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 21:56:56.021405  112102 docker.go:693] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.17.0
	k8s.gcr.io/kube-controller-manager:v1.17.0
	k8s.gcr.io/kube-apiserver:v1.17.0
	k8s.gcr.io/kube-scheduler:v1.17.0
	kubernetesui/dashboard:v2.0.0-beta8
	k8s.gcr.io/coredns:1.6.5
	k8s.gcr.io/etcd:3.4.3-0
	kubernetesui/metrics-scraper:v1.0.2
	k8s.gcr.io/kube-addon-manager:v9.0.2
	k8s.gcr.io/pause:3.1
	gcr.io/k8s-minikube/storage-provisioner:v1.8.1
	
	-- /stdout --
	I1025 21:56:56.021435  112102 docker.go:699] registry.k8s.io/kube-apiserver:v1.17.0 wasn't preloaded
	I1025 21:56:56.021446  112102 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.17.0 registry.k8s.io/kube-controller-manager:v1.17.0 registry.k8s.io/kube-scheduler:v1.17.0 registry.k8s.io/kube-proxy:v1.17.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.5 gcr.io/k8s-minikube/storage-provisioner:v5]
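
Note why the cache load is triggered at all: the VM's daemon reports the images under their legacy k8s.gcr.io names, while this minikube build expects registry.k8s.io names, so every expected image counts as missing. A toy sketch of that set difference:

    package main

    import "fmt"

    func main() {
        have := map[string]bool{ // from the `docker images` output in the log
            "k8s.gcr.io/kube-apiserver:v1.17.0": true,
            "k8s.gcr.io/pause:3.1":              true,
            // ... remaining preloaded images elided
        }
        want := []string{ // what this build asks for
            "registry.k8s.io/kube-apiserver:v1.17.0",
            "registry.k8s.io/pause:3.1",
        }
        for _, img := range want {
            if !have[img] {
                fmt.Printf("%s wasn't preloaded\n", img)
            }
        }
    }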
	I1025 21:56:56.022934  112102 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I1025 21:56:56.022963  112102 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1025 21:56:56.023010  112102 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I1025 21:56:56.023162  112102 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1025 21:56:56.023215  112102 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I1025 21:56:56.023221  112102 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1025 21:56:56.023258  112102 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I1025 21:56:56.023264  112102 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 21:56:56.023720  112102 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1025 21:56:56.023729  112102 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I1025 21:56:56.023785  112102 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1025 21:56:56.024012  112102 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 21:56:56.024021  112102 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I1025 21:56:56.024041  112102 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I1025 21:56:56.024090  112102 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1025 21:56:56.024081  112102 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I1025 21:56:56.187246  112102 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1025 21:56:56.194418  112102 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1025 21:56:56.215246  112102 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.5
	I1025 21:56:56.235944  112102 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1025 21:56:56.235997  112102 docker.go:318] Removing image: registry.k8s.io/pause:3.1
	I1025 21:56:56.236038  112102 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I1025 21:56:56.267373  112102 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.17.0
	I1025 21:56:56.279019  112102 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1025 21:56:56.279074  112102 docker.go:318] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1025 21:56:56.279122  112102 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I1025 21:56:56.289059  112102 cache_images.go:116] "registry.k8s.io/coredns:1.6.5" needs transfer: "registry.k8s.io/coredns:1.6.5" does not exist at hash "70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61" in container runtime
	I1025 21:56:56.289125  112102 docker.go:318] Removing image: registry.k8s.io/coredns:1.6.5
	I1025 21:56:56.289231  112102 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.5
	I1025 21:56:56.304720  112102 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1025 21:56:56.304829  112102 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1025 21:56:56.371830  112102 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.17.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.17.0" does not exist at hash "0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2" in container runtime
	I1025 21:56:56.371926  112102 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1025 21:56:56.371955  112102 docker.go:318] Removing image: registry.k8s.io/kube-apiserver:v1.17.0
	I1025 21:56:56.372027  112102 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.17.0
	I1025 21:56:56.372027  112102 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0
	I1025 21:56:56.383426  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 --> /var/lib/minikube/images/pause_3.1 (318976 bytes)
	I1025 21:56:56.383439  112102 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I1025 21:56:56.383677  112102 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5
	I1025 21:56:56.394357  112102 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.17.0
	I1025 21:56:56.413718  112102 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.17.0
	I1025 21:56:56.433539  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 --> /var/lib/minikube/images/etcd_3.4.3-0 (100950016 bytes)
	I1025 21:56:56.433630  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 --> /var/lib/minikube/images/coredns_1.6.5 (13241856 bytes)
	I1025 21:56:56.433562  112102 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I1025 21:56:56.433819  112102 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0
	I1025 21:56:56.497573  112102 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.17.0
	I1025 21:56:56.514219  112102 docker.go:285] Loading image: /var/lib/minikube/images/pause_3.1
	I1025 21:56:56.514651  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.1 | docker load"
	I1025 21:56:56.521644  112102 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.17.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.17.0" does not exist at hash "5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056" in container runtime
	I1025 21:56:56.521698  112102 docker.go:318] Removing image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1025 21:56:56.521753  112102 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.17.0
	I1025 21:56:56.521830  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 --> /var/lib/minikube/images/kube-apiserver_v1.17.0 (50629632 bytes)
	I1025 21:56:56.522438  112102 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.17.0" needs transfer: "registry.k8s.io/kube-proxy:v1.17.0" does not exist at hash "7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19" in container runtime
	I1025 21:56:56.522503  112102 docker.go:318] Removing image: registry.k8s.io/kube-proxy:v1.17.0
	I1025 21:56:56.522561  112102 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.17.0
	I1025 21:56:56.681267  112102 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.17.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.17.0" does not exist at hash "78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28" in container runtime
	I1025 21:56:56.681325  112102 docker.go:318] Removing image: registry.k8s.io/kube-scheduler:v1.17.0
	I1025 21:56:56.681417  112102 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.17.0
	I1025 21:56:56.794790  112102 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1025 21:56:56.794912  112102 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I1025 21:56:56.795011  112102 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I1025 21:56:56.795043  112102 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I1025 21:56:56.795212  112102 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0
	I1025 21:56:56.807630  112102 docker.go:285] Loading image: /var/lib/minikube/images/coredns_1.6.5
	I1025 21:56:56.807663  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_1.6.5 | docker load"
	I1025 21:56:56.854516  112102 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I1025 21:56:56.854637  112102 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0
	I1025 21:56:56.864936  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 --> /var/lib/minikube/images/kube-proxy_v1.17.0 (48705536 bytes)
	I1025 21:56:56.865220  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 --> /var/lib/minikube/images/kube-controller-manager_v1.17.0 (48791552 bytes)
	I1025 21:56:57.243248  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 --> /var/lib/minikube/images/kube-scheduler_v1.17.0 (33822208 bytes)
	I1025 21:56:57.243557  112102 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 from cache
	I1025 21:56:57.607528  112102 docker.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.17.0
	I1025 21:56:57.607565  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load"
	I1025 21:56:58.049506  112102 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 21:56:58.849331  112102 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load": (1.241739101s)
	I1025 21:56:58.849370  112102 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 from cache
	I1025 21:56:58.849393  112102 docker.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I1025 21:56:58.849409  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.17.0 | docker load"
	I1025 21:56:58.849432  112102 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1025 21:56:58.849475  112102 docker.go:318] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 21:56:58.849538  112102 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 21:56:59.240594  112102 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 from cache
	I1025 21:56:59.240641  112102 docker.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.17.0
	I1025 21:56:59.240659  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.17.0 | docker load"
	I1025 21:56:59.240695  112102 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1025 21:56:59.240798  112102 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1025 21:56:59.582335  112102 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 from cache
	I1025 21:56:59.582399  112102 docker.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.17.0
	I1025 21:56:59.582419  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.17.0 | docker load"
	I1025 21:56:59.582564  112102 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1025 21:56:59.582643  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1025 21:56:59.846695  112102 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 from cache
	I1025 21:56:59.846745  112102 docker.go:285] Loading image: /var/lib/minikube/images/etcd_3.4.3-0
	I1025 21:56:59.846766  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load"
	I1025 21:57:00.360525  112102 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 from cache
	I1025 21:57:00.360568  112102 docker.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1025 21:57:00.360587  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1025 21:57:00.989530  112102 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17488-80960/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1025 21:57:00.989577  112102 cache_images.go:123] Successfully loaded all cached images
	I1025 21:57:00.989586  112102 cache_images.go:92] LoadImages completed in 4.968121671s
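
Each transfer above follows the same pipeline: stat the remote tarball, scp it from the local cache if absent, then stream it into the daemon with `sudo cat <tar> | docker load`. A hypothetical local equivalent of the final load step, piping the tarball to `docker load` via stdin:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // loadImage streams an image tarball into the Docker daemon, the local
    // analogue of the `cat <tar> | docker load` runs in the log.
    func loadImage(tarPath string) error {
        f, err := os.Open(tarPath)
        if err != nil {
            return err
        }
        defer f.Close()
        cmd := exec.Command("docker", "load")
        cmd.Stdin = f
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("docker load: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := loadImage("/var/lib/minikube/images/pause_3.1"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }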
	I1025 21:57:00.989653  112102 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 21:57:01.042043  112102 cni.go:84] Creating CNI manager for ""
	I1025 21:57:01.042070  112102 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 21:57:01.042093  112102 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1025 21:57:01.042130  112102 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.236 APIServerPort:8443 KubernetesVersion:v1.17.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-634233 NodeName:stopped-upgrade-634233 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.236"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.236 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1025 21:57:01.042323  112102 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.236
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "stopped-upgrade-634233"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.236
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.236"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.17.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
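
The kubeadm config printed above is rendered from the options struct in the preceding log line. minikube uses Go templates for this; the exact template is not shown in the log, so the following is only an assumed-shape sketch of the technique covering two of the fields:

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        // Illustrative fragment only; the real template covers all sections above.
        tmpl := template.Must(template.New("kubeadm").Parse(
            "apiVersion: kubeadm.k8s.io/v1beta2\n" +
                "kind: InitConfiguration\n" +
                "localAPIEndpoint:\n" +
                "  advertiseAddress: {{.AdvertiseAddress}}\n" +
                "  bindPort: {{.APIServerPort}}\n"))
        opts := struct {
            AdvertiseAddress string
            APIServerPort    int
        }{"192.168.50.236", 8443} // values from the kubeadm options in the log
        _ = tmpl.Execute(os.Stdout, opts)
    }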
	
	I1025 21:57:01.042433  112102 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.17.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=stopped-upgrade-634233 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.236
	
	[Install]
	 config:
	{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1025 21:57:01.042515  112102 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.17.0
	I1025 21:57:01.049616  112102 binaries.go:47] Didn't find k8s binaries: didn't find preexisting kubectl
	Initiating transfer...
	I1025 21:57:01.049678  112102 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.17.0
	I1025 21:57:01.158844  112102 out.go:204] * Another minikube instance is downloading dependencies... 
	I1025 21:57:01.160320  112102 out.go:204] * Another minikube instance is downloading dependencies... 
	I1025 21:57:01.161750  112102 out.go:204] * Another minikube instance is downloading dependencies... 
	I1025 21:57:05.231603  112102 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.17.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.17.0/bin/linux/amd64/kubeadm.sha256
	I1025 21:57:05.231737  112102 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm
	I1025 21:57:05.237439  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/cache/linux/amd64/v1.17.0/kubeadm --> /var/lib/minikube/binaries/v1.17.0/kubeadm (39342080 bytes)
	I1025 21:57:09.047680  112102 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.17.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.17.0/bin/linux/amd64/kubectl.sha256
	I1025 21:57:09.047835  112102 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl
	I1025 21:57:09.058238  112102 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubectl': No such file or directory
	I1025 21:57:09.058288  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/cache/linux/amd64/v1.17.0/kubectl --> /var/lib/minikube/binaries/v1.17.0/kubectl (43495424 bytes)
	I1025 21:57:50.042000  112102 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.17.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.17.0/bin/linux/amd64/kubelet.sha256
	I1025 21:57:50.042056  112102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 21:57:50.056063  112102 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet
	I1025 21:57:50.061943  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/cache/linux/amd64/v1.17.0/kubelet --> /var/lib/minikube/binaries/v1.17.0/kubelet (111560216 bytes)
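
The `?checksum=file:<url>.sha256` suffix on each download URL above tells the fetcher to verify the binary against its published SHA-256 digest before use. An equivalent local verification step (the digest string below is a placeholder, not a real value):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // verify hashes the file and compares it to the expected hex digest,
    // the same trust check the checksum= hint requests.
    func verify(path, wantHex string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        got := hex.EncodeToString(h.Sum(nil))
        if got != wantHex {
            return fmt.Errorf("checksum mismatch: got %s want %s", got, wantHex)
        }
        return nil
    }

    func main() {
        if err := verify("kubelet", "0000placeholder"); err != nil { // placeholder digest
            fmt.Fprintln(os.Stderr, err)
        }
    }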
	I1025 21:57:50.434826  112102 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 21:57:50.442229  112102 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (350 bytes)
	I1025 21:57:50.454229  112102 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 21:57:50.467080  112102 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I1025 21:57:50.479416  112102 ssh_runner.go:195] Run: grep 192.168.50.236	control-plane.minikube.internal$ /etc/hosts
	I1025 21:57:50.484166  112102 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.236	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 21:57:50.495919  112102 certs.go:56] Setting up /home/jenkins/minikube-integration/17488-80960/.minikube/profiles for IP: 192.168.50.236
	I1025 21:57:50.495957  112102 certs.go:190] acquiring lock for shared ca certs: {Name:mk95bc4bbfee71bbd045d1866d072591cdac4e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:57:50.496130  112102 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17488-80960/.minikube/ca.key
	I1025 21:57:50.496186  112102 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17488-80960/.minikube/proxy-client-ca.key
	I1025 21:57:50.496260  112102 localpath.go:92] copying /home/jenkins/minikube-integration/17488-80960/.minikube/client.crt -> /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/client.crt
	I1025 21:57:50.496411  112102 localpath.go:117] copying /home/jenkins/minikube-integration/17488-80960/.minikube/client.key -> /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/client.key
	I1025 21:57:50.496567  112102 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/client.key
	I1025 21:57:50.496587  112102 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/apiserver.key.4e4dee8d
	I1025 21:57:50.496614  112102 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/apiserver.crt.4e4dee8d with IP's: [192.168.50.236 10.96.0.1 127.0.0.1 10.0.0.1]
	I1025 21:57:50.609353  112102 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/apiserver.crt.4e4dee8d ...
	I1025 21:57:50.609393  112102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/apiserver.crt.4e4dee8d: {Name:mke3274006c59a51371ddf063e61cd3592fc8795 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:57:50.609622  112102 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/apiserver.key.4e4dee8d ...
	I1025 21:57:50.609643  112102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/apiserver.key.4e4dee8d: {Name:mkbd85aa1d488ace7ab0f78dacaf385c02ef80a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:57:50.609778  112102 certs.go:337] copying /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/apiserver.crt.4e4dee8d -> /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/apiserver.crt
	I1025 21:57:50.609888  112102 certs.go:341] copying /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/apiserver.key.4e4dee8d -> /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/apiserver.key
	I1025 21:57:50.609969  112102 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/proxy-client.key
	I1025 21:57:50.609996  112102 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/proxy-client.crt with IP's: []
	I1025 21:57:51.008579  112102 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/proxy-client.crt ...
	I1025 21:57:51.008610  112102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/proxy-client.crt: {Name:mk97cda8597bd8dd0454f5b34698d39e84de7a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:57:51.008781  112102 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/proxy-client.key ...
	I1025 21:57:51.008799  112102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/proxy-client.key: {Name:mkf3d8f627f135177b5d2c5f8f8b6aa33103aeaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
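
The apiserver certificate above is generated with four IP SANs: the node IP, the first service-cluster IP, loopback, and 10.0.0.1. A self-signed Go sketch of a certificate carrying those SANs; the real flow signs with the minikube CA, which is omitted here for brevity:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{ // the IPs from the crypto.go line above
                net.ParseIP("192.168.50.236"), net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
            },
        }
        // Self-signed for the sketch: template doubles as its own parent.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }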
	I1025 21:57:51.009029  112102 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-80960/.minikube/certs/home/jenkins/minikube-integration/17488-80960/.minikube/certs/88244.pem (1338 bytes)
	W1025 21:57:51.009082  112102 certs.go:433] ignoring /home/jenkins/minikube-integration/17488-80960/.minikube/certs/home/jenkins/minikube-integration/17488-80960/.minikube/certs/88244_empty.pem, impossibly tiny 0 bytes
	I1025 21:57:51.009099  112102 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-80960/.minikube/certs/home/jenkins/minikube-integration/17488-80960/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 21:57:51.009130  112102 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-80960/.minikube/certs/home/jenkins/minikube-integration/17488-80960/.minikube/certs/ca.pem (1082 bytes)
	I1025 21:57:51.009159  112102 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-80960/.minikube/certs/home/jenkins/minikube-integration/17488-80960/.minikube/certs/cert.pem (1123 bytes)
	I1025 21:57:51.009198  112102 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-80960/.minikube/certs/home/jenkins/minikube-integration/17488-80960/.minikube/certs/key.pem (1679 bytes)
	I1025 21:57:51.009266  112102 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-80960/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17488-80960/.minikube/files/etc/ssl/certs/882442.pem (1708 bytes)
	I1025 21:57:51.009895  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 21:57:51.027269  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 21:57:51.042003  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 21:57:51.056896  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 21:57:51.073090  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 21:57:51.088824  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 21:57:51.105095  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 21:57:51.120739  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 21:57:51.135554  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/files/etc/ssl/certs/882442.pem --> /usr/share/ca-certificates/882442.pem (1708 bytes)
	I1025 21:57:51.149734  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 21:57:51.167049  112102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/certs/88244.pem --> /usr/share/ca-certificates/88244.pem (1338 bytes)
	I1025 21:57:51.181709  112102 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (774 bytes)
	I1025 21:57:51.192139  112102 ssh_runner.go:195] Run: openssl version
	I1025 21:57:51.198978  112102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/882442.pem && ln -fs /usr/share/ca-certificates/882442.pem /etc/ssl/certs/882442.pem"
	I1025 21:57:51.207819  112102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/882442.pem
	I1025 21:57:51.213258  112102 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 25 21:19 /usr/share/ca-certificates/882442.pem
	I1025 21:57:51.213318  112102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/882442.pem
	I1025 21:57:51.228434  112102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/882442.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 21:57:51.238181  112102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 21:57:51.247027  112102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:57:51.253672  112102 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 25 21:13 /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:57:51.253728  112102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 21:57:51.266906  112102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 21:57:51.274400  112102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88244.pem && ln -fs /usr/share/ca-certificates/88244.pem /etc/ssl/certs/88244.pem"
	I1025 21:57:51.282082  112102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88244.pem
	I1025 21:57:51.287987  112102 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 25 21:19 /usr/share/ca-certificates/88244.pem
	I1025 21:57:51.288040  112102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88244.pem
	I1025 21:57:51.300811  112102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88244.pem /etc/ssl/certs/51391683.0"
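
The symlink names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes: the system trust store is looked up by `openssl x509 -hash`, so each CA file gets a <hash>.0 link in /etc/ssl/certs. A sketch of the same two steps:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
        // Step 1: ask openssl for the subject hash, as in the log.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941, matching the log
        // Step 2: create the <hash>.0 trust-store link (emulating ln -fs).
        link := "/etc/ssl/certs/" + hash + ".0"
        os.Remove(link)
        if err := os.Symlink(pemPath, link); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }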
	I1025 21:57:51.308229  112102 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 21:57:51.312672  112102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 21:57:51.328249  112102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 21:57:51.343032  112102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 21:57:51.355651  112102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 21:57:51.367489  112102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 21:57:51.378975  112102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
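
Each `openssl x509 -checkend 86400` run above asks whether the certificate expires within the next 24 hours, which decides whether the control-plane certs must be regenerated. A pure-Go equivalent using crypto/x509 (the path below is one of the certs checked in the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the cert's NotAfter falls inside the
    // given window, matching the semantics of `openssl x509 -checkend`.
    func expiresWithin(pemPath string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 86400*time.Second)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }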
	I1025 21:57:51.391334  112102 kubeadm.go:404] StartCluster: {Name:stopped-upgrade-634233 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.236 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1025 21:57:51.391478  112102 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 21:57:51.430350  112102 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 21:57:51.439323  112102 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1025 21:57:51.439351  112102 kubeadm.go:636] restartCluster start
	I1025 21:57:51.439399  112102 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 21:57:51.445868  112102 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 21:57:51.446460  112102 kubeconfig.go:135] verify returned: extract IP: "stopped-upgrade-634233" does not appear in /home/jenkins/minikube-integration/17488-80960/kubeconfig
	I1025 21:57:51.446615  112102 kubeconfig.go:146] "stopped-upgrade-634233" context is missing from /home/jenkins/minikube-integration/17488-80960/kubeconfig - will repair!
	I1025 21:57:51.446998  112102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-80960/kubeconfig: {Name:mk4723f12542c40c1c944f4b4dc7af3f0a23b0b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:57:51.447831  112102 kapi.go:59] client config for stopped-upgrade-634233: &rest.Config{Host:"https://192.168.50.236:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17488-80960/.minikube/profiles/stopped-upgrade-634233/client.crt", KeyFile:"/home/jenkins/minikube-integration/17488-80960/.minikube/profiles/stopped-upgrade-634233/client.key", CAFile:"/home/jenkins/minikube-integration/17488-80960/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c28ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 21:57:51.448946  112102 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 21:57:51.454202  112102 kubeadm.go:602] needs reconfigure: configs differ:
	
	** stderr ** 
	diff: can't stat '/var/tmp/minikube/kubeadm.yaml': No such file or directory
	
	** /stderr **
	I1025 21:57:51.454218  112102 kubeadm.go:1128] stopping kube-system containers ...
	I1025 21:57:51.454276  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 21:57:51.489478  112102 docker.go:464] Stopping containers: [111a4f5088ac 53138481ecbd a131edff470e 09fabc795729 46604f6a66ea 2d616a9c0cbc eab03f304139 a4dfe92c6dc7 52d24719c1f3 ecbc25e58349]
	I1025 21:57:51.489556  112102 ssh_runner.go:195] Run: docker stop 111a4f5088ac 53138481ecbd a131edff470e 09fabc795729 46604f6a66ea 2d616a9c0cbc eab03f304139 a4dfe92c6dc7 52d24719c1f3 ecbc25e58349
	I1025 21:57:51.531256  112102 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 21:57:51.543077  112102 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 21:57:51.550268  112102 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 21:57:51.550336  112102 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 21:57:51.557283  112102 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1025 21:57:51.557307  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 21:57:51.629509  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 21:57:52.686417  112102 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.05686163s)
	I1025 21:57:52.686457  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 21:57:52.944829  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 21:57:53.063902  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
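
The five invocations above replay discrete `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same config file, instead of running a full `kubeadm init`, and each uses the version-pinned binaries under /var/lib/minikube/binaries. A rough sketch of that sequence, with the phase list and paths taken from the log (illustrative, not the minikube source):

```go
// Sketch: replay individual kubeadm init phases in the order seen above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s", phase, err, out)
			return
		}
	}
}
```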
	I1025 21:57:53.173371  112102 api_server.go:52] waiting for apiserver process to appear ...
	I1025 21:57:53.173454  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:57:53.187999  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:57:53.697472  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:57:54.197984  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:57:54.697638  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:57:55.197736  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:57:55.697345  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:57:56.197293  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:57:56.697277  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:57:57.197744  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:57:57.697308  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:57:58.198145  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:57:58.697515  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:57:59.197973  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:57:59.698007  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:00.197325  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:00.697580  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:01.197459  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:01.697471  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:02.197683  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:02.698234  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:03.197789  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:03.697521  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:04.198089  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:04.697863  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:05.197359  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:05.698192  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:06.197339  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:06.697321  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:07.197823  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:07.697508  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:08.197340  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:08.697359  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:09.198155  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:09.697985  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:10.197319  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:10.698299  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:11.197871  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:11.697358  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:12.197827  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:12.697644  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:13.197514  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:13.698125  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:14.197655  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:14.697698  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:15.197343  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:15.697296  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:16.200675  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:16.697598  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:17.197527  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:17.699520  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:18.197999  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:18.697318  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:19.197427  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:19.698114  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:20.197223  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:20.697685  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:21.197739  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:21.698068  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:22.197730  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:22.698086  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:23.197325  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:23.698261  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:24.197867  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:24.698110  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:25.197398  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:25.697858  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:26.197314  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:26.697534  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:27.208952  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:27.697321  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:28.197987  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:28.698117  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:29.197909  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:29.698056  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:30.197638  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:30.698096  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:31.197402  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:31.698244  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:32.197911  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:32.697362  112102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:58:32.707681  112102 api_server.go:72] duration metric: took 39.534307295s to wait for apiserver process to appear ...
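
The timestamps above show the process wait polling `pgrep` on a roughly 500ms cadence until a kube-apiserver process appears, which in this run took 39.5s. A minimal sketch of such a wait loop, with the pattern and interval assumed from the log (not the minikube source):

```go
// Sketch: poll pgrep until a process matching the pattern appears,
// mirroring the ~500ms retry cadence visible in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // pgrep exits 0 once a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %q", pattern)
}

func main() {
	fmt.Println(waitForProcess("kube-apiserver.*minikube.*", 90*time.Second))
}
```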
	I1025 21:58:32.707711  112102 api_server.go:88] waiting for apiserver healthz status ...
	I1025 21:58:32.707731  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:32.708721  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:32.708789  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:32.709323  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:33.210148  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:33.210940  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:33.709487  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:33.710073  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:34.210185  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:34.210896  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:34.709436  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:34.710147  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:35.209617  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:35.210240  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:35.709787  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:35.710463  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:36.209807  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:36.210463  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:36.710078  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:36.710839  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:37.209975  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:37.210593  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:37.710182  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:37.710782  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:38.210399  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:38.211082  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:38.709530  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:38.710155  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:39.210275  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:39.210866  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:39.710160  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:39.710772  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:40.210390  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:40.210985  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:40.710229  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:40.710837  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:41.210455  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:41.211126  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:41.709816  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:41.710389  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:42.210057  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:42.210725  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:42.710341  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:42.710887  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:43.209412  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:43.209918  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:43.710307  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:43.710917  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:44.209971  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:44.210602  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:44.710234  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:44.710853  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:45.210501  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:45.211310  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:45.709512  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:45.710117  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:46.209678  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:46.210488  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:46.710059  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:46.710656  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:47.210450  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:47.211062  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:47.710358  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:47.710985  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:48.210206  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:48.210880  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:48.709404  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:48.710014  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:49.210025  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:49.210895  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:49.710044  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:49.710791  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:50.210343  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:50.210955  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:50.709959  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:50.710624  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:51.210276  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:51.210948  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:51.709697  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:51.710376  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:52.210290  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:52.211031  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:52.710316  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:52.710977  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:53.209542  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:53.210341  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:53.709863  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:53.710577  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:54.209553  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:54.210250  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:54.709667  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:54.710368  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:55.209947  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:55.210627  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:55.710166  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:55.710848  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:56.210468  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:56.211173  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:56.709705  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:56.710366  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:57.210362  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:57.211132  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:57.709746  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:57.710411  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:58.209502  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:58.210078  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:58.709619  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:58.710316  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:59.210128  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:59.210748  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:58:59.710476  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:58:59.711241  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:00.209701  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:00.210495  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:00.710093  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:00.710624  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:01.210227  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:01.210838  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:01.709506  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:01.710179  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:02.210102  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:02.212604  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:02.710269  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:02.710889  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:03.209458  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:03.210085  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:03.709644  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:03.710269  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:04.210280  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:04.211015  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:04.709538  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:04.710289  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:05.209817  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:05.210566  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:05.709858  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:05.710516  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:06.209747  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:06.210470  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:06.710141  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:06.710773  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:07.209984  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:07.210686  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:07.710309  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:07.710946  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:08.209473  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:08.210141  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:08.710338  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:08.710898  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:09.210190  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:09.210821  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:09.710115  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:09.710782  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:10.210085  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:10.210770  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:10.710336  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:10.711000  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:11.209559  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:11.210254  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:11.709865  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:11.710586  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:12.210355  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:12.210940  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:12.709487  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:12.710163  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:13.209712  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:13.210388  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:13.709947  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:13.710631  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:14.210417  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:14.210951  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:14.709489  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:14.710118  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:15.209668  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:15.210289  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:15.709927  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:15.710610  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:16.210322  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:16.211138  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:16.709681  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:16.710367  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:17.210418  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:17.211086  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:17.709632  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:17.710312  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:18.209506  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:18.210210  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:18.709757  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:18.710506  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:19.209477  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:19.210099  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:19.709476  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:19.710136  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:20.210368  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:20.211099  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:20.709488  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:20.710098  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:21.209668  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:21.210461  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:21.709923  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:21.710552  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:22.210271  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:22.211003  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:22.710396  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:22.711085  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:23.209611  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:23.210254  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:23.710469  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:23.711058  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:24.209846  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:24.210469  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:24.710017  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:24.710608  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:25.209760  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:25.210443  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:25.709743  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:25.710473  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:26.210089  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:26.210734  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:26.709963  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:26.710627  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:27.210338  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:27.211038  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:27.710244  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:27.711037  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:28.210433  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:28.211284  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:28.709825  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:28.710496  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:29.210468  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:29.211125  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:29.709488  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:29.710254  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:30.209498  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:30.210197  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:30.709517  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:30.710142  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:31.209708  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:31.210398  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:31.710105  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:31.710810  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:32.210092  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:32.210753  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
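
Every healthz probe over the preceding minute fails with "connection refused": a kube-apiserver process exists, but nothing is accepting connections on 192.168.50.236:8443, which is consistent with the CrashLoopBackOff entries in the kubelet log gathered below. The probe is essentially a GET against /healthz; a self-contained sketch follows (it skips TLS verification to stay minimal, whereas the real client authenticates with the cluster CA):

```go
// Sketch: poll https://<node-ip>:8443/healthz until it answers,
// as the api_server.go checks above do roughly every 500ms.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption for the sketch: skip certificate verification so the
		// example needs no CA material.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 120; i++ {
		resp, err := client.Get("https://192.168.50.236:8443/healthz")
		if err == nil {
			resp.Body.Close()
			fmt.Println("healthz:", resp.Status)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}
```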
	I1025 21:59:32.710490  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 21:59:32.764127  112102 logs.go:284] 1 containers: [615f2a0c1ed5]
	I1025 21:59:32.764211  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 21:59:32.809251  112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
	I1025 21:59:32.809345  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 21:59:32.855804  112102 logs.go:284] 0 containers: []
	W1025 21:59:32.855828  112102 logs.go:286] No container was found matching "coredns"
	I1025 21:59:32.855882  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 21:59:32.921645  112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
	I1025 21:59:32.921715  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 21:59:32.961462  112102 logs.go:284] 0 containers: []
	W1025 21:59:32.961490  112102 logs.go:286] No container was found matching "kube-proxy"
	I1025 21:59:32.961549  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 21:59:32.997755  112102 logs.go:284] 2 containers: [a3ae303714a2 53138481ecbd]
	I1025 21:59:32.997850  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 21:59:33.034926  112102 logs.go:284] 0 containers: []
	W1025 21:59:33.034957  112102 logs.go:286] No container was found matching "kindnet"
	I1025 21:59:33.034970  112102 logs.go:123] Gathering logs for kubelet ...
	I1025 21:59:33.034986  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1025 21:59:33.074719  112102 logs.go:138] Found kubelet problem: Oct 25 21:59:14 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:14.747557    6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 21:59:33.076385  112102 logs.go:138] Found kubelet problem: Oct 25 21:59:15 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:15.692044    6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 21:59:33.082949  112102 logs.go:138] Found kubelet problem: Oct 25 21:59:19 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:19.485561    6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 21:59:33.092592  112102 logs.go:138] Found kubelet problem: Oct 25 21:59:25 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:25.808146    6297 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 21:59:33.095629  112102 logs.go:138] Found kubelet problem: Oct 25 21:59:27 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:27.816404    6297 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 21:59:33.100891  112102 logs.go:138] Found kubelet problem: Oct 25 21:59:30 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:30.889849    6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 21:59:33.103159  112102 logs.go:138] Found kubelet problem: Oct 25 21:59:31 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:31.919778    6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 21:59:33.105196  112102 logs.go:123] Gathering logs for dmesg ...
	I1025 21:59:33.105220  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 21:59:33.117307  112102 logs.go:123] Gathering logs for describe nodes ...
	I1025 21:59:33.117331  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 21:59:33.194424  112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	error: tls: private key does not match public key
	 output: 
	** stderr ** 
	error: tls: private key does not match public key
	
	** /stderr **
	I1025 21:59:33.194451  112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
	I1025 21:59:33.194469  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
	I1025 21:59:33.235178  112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
	I1025 21:59:33.235216  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
	I1025 21:59:33.354812  112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
	I1025 21:59:33.354846  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
	I1025 21:59:33.406183  112102 logs.go:123] Gathering logs for kube-controller-manager [a3ae303714a2] ...
	I1025 21:59:33.406213  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ae303714a2"
	I1025 21:59:33.450038  112102 logs.go:123] Gathering logs for kube-apiserver [615f2a0c1ed5] ...
	I1025 21:59:33.450071  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615f2a0c1ed5"
	I1025 21:59:33.524729  112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
	I1025 21:59:33.524761  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
	I1025 21:59:33.567717  112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
	I1025 21:59:33.567750  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
	I1025 21:59:33.614986  112102 logs.go:123] Gathering logs for Docker ...
	I1025 21:59:33.615015  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 21:59:33.656670  112102 logs.go:123] Gathering logs for container status ...
	I1025 21:59:33.656707  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 21:59:33.684116  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 21:59:33.684148  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1025 21:59:33.684211  112102 out.go:239] X Problems detected in kubelet:
	W1025 21:59:33.684262  112102 out.go:239]   Oct 25 21:59:19 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:19.485561    6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 21:59:33.684285  112102 out.go:239]   Oct 25 21:59:25 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:25.808146    6297 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 21:59:33.684297  112102 out.go:239]   Oct 25 21:59:27 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:27.816404    6297 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 21:59:33.684305  112102 out.go:239]   Oct 25 21:59:30 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:30.889849    6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 21:59:33.684316  112102 out.go:239]   Oct 25 21:59:31 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:31.919778    6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 21:59:33.684329  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 21:59:33.684338  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:59:43.685399  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:43.686177  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
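	Each retry cycle begins with the probe above: an HTTPS GET against the apiserver's /healthz endpoint. "connect: connection refused" means nothing is listening on 192.168.50.236:8443 at all, which matches the CrashLoopBackOff state of the kube-apiserver container rather than an unhealthy-but-running apiserver. A manual equivalent of the probe (IP and port taken from this log):
	
	    # -k skips certificate verification; the health check only cares
	    # whether anything answers on the port and what the body says.
	    curl -k https://192.168.50.236:8443/healthz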
	I1025 21:59:43.686302  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 21:59:43.719867  112102 logs.go:284] 1 containers: [615f2a0c1ed5]
	I1025 21:59:43.719964  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 21:59:43.759211  112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
	I1025 21:59:43.759282  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 21:59:43.796185  112102 logs.go:284] 0 containers: []
	W1025 21:59:43.796227  112102 logs.go:286] No container was found matching "coredns"
	I1025 21:59:43.796303  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 21:59:43.834577  112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
	I1025 21:59:43.834659  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 21:59:43.874092  112102 logs.go:284] 0 containers: []
	W1025 21:59:43.874121  112102 logs.go:286] No container was found matching "kube-proxy"
	I1025 21:59:43.874196  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 21:59:43.921307  112102 logs.go:284] 3 containers: [c83026fba0c7 a3ae303714a2 53138481ecbd]
	I1025 21:59:43.921435  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 21:59:43.957485  112102 logs.go:284] 0 containers: []
	W1025 21:59:43.957512  112102 logs.go:286] No container was found matching "kindnet"
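	The block above shows how each control-plane component is located: one `docker ps -a` per component, filtered on the k8s_<component> name prefix that the kubelet assigns under the Docker runtime. Two or three IDs for a single component mean exited instances from earlier crashes are still present alongside the current one, which is why the sections that follow gather logs from several etcd and kube-controller-manager containers. The same enumeration can be reproduced inside the VM, e.g.:
	
	    # List every kube-controller-manager container, running or exited,
	    # with its state, to see the crash history at a glance.
	    docker ps -a --filter=name=k8s_kube-controller-manager --format '{{.ID}} {{.Status}}'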
	I1025 21:59:43.957533  112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
	I1025 21:59:43.957552  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
	I1025 21:59:43.999404  112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
	I1025 21:59:43.999440  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
	I1025 21:59:44.053794  112102 logs.go:123] Gathering logs for kube-apiserver [615f2a0c1ed5] ...
	I1025 21:59:44.053828  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 615f2a0c1ed5"
	I1025 21:59:44.118900  112102 logs.go:123] Gathering logs for kube-controller-manager [c83026fba0c7] ...
	I1025 21:59:44.118931  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83026fba0c7"
	I1025 21:59:44.152873  112102 logs.go:123] Gathering logs for Docker ...
	I1025 21:59:44.152924  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 21:59:44.183998  112102 logs.go:123] Gathering logs for describe nodes ...
	I1025 21:59:44.184032  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 21:59:44.251422  112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	error: tls: private key does not match public key
	I1025 21:59:44.251445  112102 logs.go:123] Gathering logs for kube-controller-manager [a3ae303714a2] ...
	I1025 21:59:44.251460  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ae303714a2"
	I1025 21:59:44.289549  112102 logs.go:123] Gathering logs for kubelet ...
	I1025 21:59:44.289579  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1025 21:59:44.308813  112102 logs.go:138] Found kubelet problem: Oct 25 21:59:19 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:19.485561    6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 21:59:44.318555  112102 logs.go:138] Found kubelet problem: Oct 25 21:59:25 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:25.808146    6297 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 21:59:44.321643  112102 logs.go:138] Found kubelet problem: Oct 25 21:59:27 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:27.816404    6297 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 21:59:44.326759  112102 logs.go:138] Found kubelet problem: Oct 25 21:59:30 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:30.889849    6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 21:59:44.328729  112102 logs.go:138] Found kubelet problem: Oct 25 21:59:31 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:31.919778    6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 21:59:44.340716  112102 logs.go:138] Found kubelet problem: Oct 25 21:59:39 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:39.483253    6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 21:59:44.348449  112102 logs.go:123] Gathering logs for dmesg ...
	I1025 21:59:44.348471  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 21:59:44.364333  112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
	I1025 21:59:44.364364  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
	I1025 21:59:44.418505  112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
	I1025 21:59:44.418541  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
	I1025 21:59:44.525393  112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
	I1025 21:59:44.525439  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
	I1025 21:59:44.573844  112102 logs.go:123] Gathering logs for container status ...
	I1025 21:59:44.573876  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 21:59:44.598877  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 21:59:44.598902  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1025 21:59:44.598957  112102 out.go:239] X Problems detected in kubelet:
	W1025 21:59:44.599004  112102 out.go:239]   Oct 25 21:59:25 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:25.808146    6297 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 21:59:44.599022  112102 out.go:239]   Oct 25 21:59:27 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:27.816404    6297 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 21:59:44.599031  112102 out.go:239]   Oct 25 21:59:30 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:30.889849    6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 21:59:44.599041  112102 out.go:239]   Oct 25 21:59:31 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:31.919778    6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 21:59:44.599055  112102 out.go:239]   Oct 25 21:59:39 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:39.483253    6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 21:59:44.599067  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 21:59:44.599077  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:59:54.599724  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 21:59:54.600379  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 21:59:54.600496  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 21:59:54.637503  112102 logs.go:284] 1 containers: [6f3f3376dd08]
	I1025 21:59:54.637588  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 21:59:54.677247  112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
	I1025 21:59:54.677331  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 21:59:54.713087  112102 logs.go:284] 0 containers: []
	W1025 21:59:54.713114  112102 logs.go:286] No container was found matching "coredns"
	I1025 21:59:54.713176  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 21:59:54.753285  112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
	I1025 21:59:54.753358  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 21:59:54.792252  112102 logs.go:284] 0 containers: []
	W1025 21:59:54.792279  112102 logs.go:286] No container was found matching "kube-proxy"
	I1025 21:59:54.792343  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 21:59:54.848606  112102 logs.go:284] 3 containers: [7792b9b4e0ee c83026fba0c7 53138481ecbd]
	I1025 21:59:54.848702  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 21:59:54.890540  112102 logs.go:284] 0 containers: []
	W1025 21:59:54.890567  112102 logs.go:286] No container was found matching "kindnet"
	I1025 21:59:54.890588  112102 logs.go:123] Gathering logs for kubelet ...
	I1025 21:59:54.890605  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1025 21:59:54.922924  112102 logs.go:138] Found kubelet problem: Oct 25 21:59:39 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:39.483253    6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 21:59:54.955771  112102 logs.go:138] Found kubelet problem: Oct 25 21:59:52 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:52.148352    7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 21:59:54.958127  112102 logs.go:138] Found kubelet problem: Oct 25 21:59:53 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:53.158062    7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 21:59:54.960784  112102 logs.go:123] Gathering logs for dmesg ...
	I1025 21:59:54.960809  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 21:59:54.972726  112102 logs.go:123] Gathering logs for kube-controller-manager [7792b9b4e0ee] ...
	I1025 21:59:54.972753  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7792b9b4e0ee"
	I1025 21:59:55.016703  112102 logs.go:123] Gathering logs for kube-controller-manager [c83026fba0c7] ...
	I1025 21:59:55.016739  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c83026fba0c7"
	I1025 21:59:55.063153  112102 logs.go:123] Gathering logs for kube-apiserver [6f3f3376dd08] ...
	I1025 21:59:55.063193  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f3f3376dd08"
	I1025 21:59:55.132083  112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
	I1025 21:59:55.132120  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
	I1025 21:59:55.213579  112102 logs.go:123] Gathering logs for container status ...
	I1025 21:59:55.213613  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 21:59:55.240458  112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
	I1025 21:59:55.240500  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
	I1025 21:59:55.332880  112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
	I1025 21:59:55.332919  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
	I1025 21:59:55.374188  112102 logs.go:123] Gathering logs for Docker ...
	I1025 21:59:55.374225  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 21:59:55.409677  112102 logs.go:123] Gathering logs for describe nodes ...
	I1025 21:59:55.409725  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 21:59:55.491633  112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	error: tls: private key does not match public key
	I1025 21:59:55.491657  112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
	I1025 21:59:55.491670  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
	I1025 21:59:55.532938  112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
	I1025 21:59:55.532973  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
	I1025 21:59:55.582374  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 21:59:55.582410  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1025 21:59:55.582481  112102 out.go:239] X Problems detected in kubelet:
	W1025 21:59:55.582497  112102 out.go:239]   Oct 25 21:59:39 stopped-upgrade-634233 kubelet[6297]: E1025 21:59:39.483253    6297 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 21:59:55.582509  112102 out.go:239]   Oct 25 21:59:52 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:52.148352    7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 21:59:55.582520  112102 out.go:239]   Oct 25 21:59:53 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:53.158062    7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 21:59:55.582530  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 21:59:55.582539  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 22:00:05.583121  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 22:00:05.583754  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 22:00:05.583869  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 22:00:05.628720  112102 logs.go:284] 1 containers: [6f3f3376dd08]
	I1025 22:00:05.628818  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 22:00:05.669782  112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
	I1025 22:00:05.669878  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 22:00:05.710788  112102 logs.go:284] 0 containers: []
	W1025 22:00:05.710815  112102 logs.go:286] No container was found matching "coredns"
	I1025 22:00:05.710876  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 22:00:05.744925  112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
	I1025 22:00:05.745017  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 22:00:05.780243  112102 logs.go:284] 0 containers: []
	W1025 22:00:05.780275  112102 logs.go:286] No container was found matching "kube-proxy"
	I1025 22:00:05.780337  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 22:00:05.816325  112102 logs.go:284] 2 containers: [7792b9b4e0ee 53138481ecbd]
	I1025 22:00:05.816428  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 22:00:05.850408  112102 logs.go:284] 0 containers: []
	W1025 22:00:05.850431  112102 logs.go:286] No container was found matching "kindnet"
	I1025 22:00:05.850445  112102 logs.go:123] Gathering logs for container status ...
	I1025 22:00:05.850463  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:00:05.873965  112102 logs.go:123] Gathering logs for kubelet ...
	I1025 22:00:05.874004  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1025 22:00:05.909283  112102 logs.go:138] Found kubelet problem: Oct 25 21:59:52 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:52.148352    7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:05.911140  112102 logs.go:138] Found kubelet problem: Oct 25 21:59:53 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:53.158062    7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:05.919483  112102 logs.go:138] Found kubelet problem: Oct 25 21:59:58 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:58.193072    7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:05.930738  112102 logs.go:138] Found kubelet problem: Oct 25 22:00:05 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:05.313295    7903 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	I1025 22:00:05.931642  112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
	I1025 22:00:05.931661  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
	I1025 22:00:05.979912  112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
	I1025 22:00:05.979952  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
	I1025 22:00:06.018284  112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
	I1025 22:00:06.018323  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
	I1025 22:00:06.120746  112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
	I1025 22:00:06.120793  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
	I1025 22:00:06.168084  112102 logs.go:123] Gathering logs for kube-controller-manager [7792b9b4e0ee] ...
	I1025 22:00:06.168120  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7792b9b4e0ee"
	I1025 22:00:06.205023  112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
	I1025 22:00:06.205059  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
	I1025 22:00:06.268313  112102 logs.go:123] Gathering logs for Docker ...
	I1025 22:00:06.268349  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 22:00:06.302210  112102 logs.go:123] Gathering logs for dmesg ...
	I1025 22:00:06.302242  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:00:06.313419  112102 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:00:06.313453  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:00:06.388866  112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	error: tls: private key does not match public key
	I1025 22:00:06.388912  112102 logs.go:123] Gathering logs for kube-apiserver [6f3f3376dd08] ...
	I1025 22:00:06.388930  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f3f3376dd08"
	I1025 22:00:06.461096  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:00:06.461126  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1025 22:00:06.461189  112102 out.go:239] X Problems detected in kubelet:
	W1025 22:00:06.461204  112102 out.go:239]   Oct 25 21:59:52 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:52.148352    7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:06.461213  112102 out.go:239]   Oct 25 21:59:53 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:53.158062    7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:06.461222  112102 out.go:239]   Oct 25 21:59:58 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:58.193072    7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:06.461228  112102 out.go:239]   Oct 25 22:00:05 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:05.313295    7903 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	I1025 22:00:06.461238  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:00:06.461246  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 22:00:16.462648  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 22:00:16.463418  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 22:00:16.463519  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 22:00:16.505496  112102 logs.go:284] 1 containers: [0512b49c1a2e]
	I1025 22:00:16.505584  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 22:00:16.537874  112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
	I1025 22:00:16.537979  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 22:00:16.569920  112102 logs.go:284] 0 containers: []
	W1025 22:00:16.569947  112102 logs.go:286] No container was found matching "coredns"
	I1025 22:00:16.570030  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 22:00:16.601152  112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
	I1025 22:00:16.601239  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 22:00:16.637702  112102 logs.go:284] 0 containers: []
	W1025 22:00:16.637729  112102 logs.go:286] No container was found matching "kube-proxy"
	I1025 22:00:16.637792  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 22:00:16.673917  112102 logs.go:284] 2 containers: [7792b9b4e0ee 53138481ecbd]
	I1025 22:00:16.674009  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 22:00:16.709847  112102 logs.go:284] 0 containers: []
	W1025 22:00:16.709877  112102 logs.go:286] No container was found matching "kindnet"
	I1025 22:00:16.709892  112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
	I1025 22:00:16.709914  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
	I1025 22:00:16.753177  112102 logs.go:123] Gathering logs for kube-controller-manager [7792b9b4e0ee] ...
	I1025 22:00:16.753213  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7792b9b4e0ee"
	I1025 22:00:16.793850  112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
	I1025 22:00:16.793895  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
	I1025 22:00:16.842129  112102 logs.go:123] Gathering logs for container status ...
	I1025 22:00:16.842162  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:00:16.860456  112102 logs.go:123] Gathering logs for kubelet ...
	I1025 22:00:16.860487  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1025 22:00:16.880660  112102 logs.go:138] Found kubelet problem: Oct 25 21:59:53 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:53.158062    7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:16.896481  112102 logs.go:138] Found kubelet problem: Oct 25 21:59:58 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:58.193072    7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:16.909020  112102 logs.go:138] Found kubelet problem: Oct 25 22:00:05 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:05.313295    7903 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:00:16.918759  112102 logs.go:138] Found kubelet problem: Oct 25 22:00:11 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:11.353514    7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:16.920836  112102 logs.go:138] Found kubelet problem: Oct 25 22:00:12 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:12.560217    7903 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	I1025 22:00:16.927678  112102 logs.go:123] Gathering logs for dmesg ...
	I1025 22:00:16.927700  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:00:16.939653  112102 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:00:16.939685  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:00:17.008883  112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	error: tls: private key does not match public key
	I1025 22:00:17.008908  112102 logs.go:123] Gathering logs for kube-apiserver [0512b49c1a2e] ...
	I1025 22:00:17.008928  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0512b49c1a2e"
	I1025 22:00:17.088124  112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
	I1025 22:00:17.088157  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
	I1025 22:00:17.126843  112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
	I1025 22:00:17.126887  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
	I1025 22:00:17.236070  112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
	I1025 22:00:17.236109  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
	I1025 22:00:17.281446  112102 logs.go:123] Gathering logs for Docker ...
	I1025 22:00:17.281485  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 22:00:17.319438  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:00:17.319471  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1025 22:00:17.319527  112102 out.go:239] X Problems detected in kubelet:
	W1025 22:00:17.319535  112102 out.go:239]   Oct 25 21:59:53 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:53.158062    7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:17.319552  112102 out.go:239]   Oct 25 21:59:58 stopped-upgrade-634233 kubelet[7903]: E1025 21:59:58.193072    7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:17.319558  112102 out.go:239]   Oct 25 22:00:05 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:05.313295    7903 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:00:17.319569  112102 out.go:239]   Oct 25 22:00:11 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:11.353514    7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:17.319580  112102 out.go:239]   Oct 25 22:00:12 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:12.560217    7903 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	I1025 22:00:17.319597  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:00:17.319605  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 22:00:27.320410  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 22:00:27.321010  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 22:00:27.321092  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 22:00:27.356154  112102 logs.go:284] 1 containers: [0512b49c1a2e]
	I1025 22:00:27.356218  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 22:00:27.393164  112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
	I1025 22:00:27.393254  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 22:00:27.426940  112102 logs.go:284] 0 containers: []
	W1025 22:00:27.426962  112102 logs.go:286] No container was found matching "coredns"
	I1025 22:00:27.427010  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 22:00:27.461064  112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
	I1025 22:00:27.461150  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 22:00:27.499676  112102 logs.go:284] 0 containers: []
	W1025 22:00:27.499708  112102 logs.go:286] No container was found matching "kube-proxy"
	I1025 22:00:27.499771  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 22:00:27.531782  112102 logs.go:284] 3 containers: [16645aa4516e 7792b9b4e0ee 53138481ecbd]
	I1025 22:00:27.531865  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 22:00:27.561803  112102 logs.go:284] 0 containers: []
	W1025 22:00:27.561832  112102 logs.go:286] No container was found matching "kindnet"
	I1025 22:00:27.561851  112102 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:00:27.561869  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:00:27.636933  112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	error: tls: private key does not match public key
	I1025 22:00:27.636957  112102 logs.go:123] Gathering logs for kube-apiserver [0512b49c1a2e] ...
	I1025 22:00:27.636968  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0512b49c1a2e"
	I1025 22:00:27.705703  112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
	I1025 22:00:27.705740  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
	I1025 22:00:27.750043  112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
	I1025 22:00:27.750071  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
	I1025 22:00:27.797353  112102 logs.go:123] Gathering logs for dmesg ...
	I1025 22:00:27.797398  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:00:27.806863  112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
	I1025 22:00:27.806894  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
	I1025 22:00:27.855830  112102 logs.go:123] Gathering logs for kubelet ...
	I1025 22:00:27.855863  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1025 22:00:27.886260  112102 logs.go:138] Found kubelet problem: Oct 25 22:00:05 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:05.313295    7903 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:00:27.901849  112102 logs.go:138] Found kubelet problem: Oct 25 22:00:11 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:11.353514    7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:27.903887  112102 logs.go:138] Found kubelet problem: Oct 25 22:00:12 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:12.560217    7903 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:00:27.913116  112102 logs.go:138] Found kubelet problem: Oct 25 22:00:18 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:18.187708    7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
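The back-off durations in these kubelet problems follow the kubelet's crash-loop policy: the delay doubles on each failed restart (10s, 20s, 40s, and so on up to a cap of a few minutes under default settings; the cap is background knowledge, not stated in this report). With kubectl unusable here, the natural next step is to inspect the dead container directly, e.g. for the apiserver container ID listed above:

    # Exit code and OOM flag of the crash-looping apiserver container
    # (ID 0512b49c1a2e comes from the container listing above).
    docker inspect --format 'exit={{.State.ExitCode}} oom={{.State.OOMKilled}}' 0512b49c1a2e
    docker logs --tail 20 0512b49c1a2e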
	I1025 22:00:27.926982  112102 logs.go:123] Gathering logs for kube-controller-manager [16645aa4516e] ...
	I1025 22:00:27.927014  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16645aa4516e"
	I1025 22:00:27.966796  112102 logs.go:123] Gathering logs for Docker ...
	I1025 22:00:27.966830  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 22:00:27.998909  112102 logs.go:123] Gathering logs for container status ...
	I1025 22:00:27.998948  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:00:29.018614  112102 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (1.01964037s)
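The container-status command above is a runtime-agnostic fallback chain: `which crictl || echo crictl` substitutes the bare word "crictl" when the binary is missing, the sudo invocation then fails with "command not found", and the `||` branch falls through to docker.

    # Try crictl first, fall back to docker if it is absent or errors.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a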
	I1025 22:00:29.019223  112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
	I1025 22:00:29.019238  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
	I1025 22:00:29.117653  112102 logs.go:123] Gathering logs for kube-controller-manager [7792b9b4e0ee] ...
	I1025 22:00:29.117691  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7792b9b4e0ee"
	W1025 22:00:29.199848  112102 logs.go:130] failed kube-controller-manager [7792b9b4e0ee]: command: /bin/bash -c "docker logs --tail 400 7792b9b4e0ee" /bin/bash -c "docker logs --tail 400 7792b9b4e0ee": Process exited with status 1
	stdout:
	
	stderr:
	Error: No such container: 7792b9b4e0ee
	 output: 
	** stderr ** 
	Error: No such container: 7792b9b4e0ee
	
	** /stderr **
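"No such container" here is a race, not a new fault: 7792b9b4e0ee was present when the controller-manager containers were listed at 22:00:27, but the crash-looping container was pruned and replaced (by e1d2be52be40 in the next cycle) before its logs could be read. A race-tolerant variant would re-list immediately before fetching (a hypothetical sketch, not minikube's actual code):

    # Re-list right before reading logs so IDs pruned in between are
    # skipped instead of turning into hard errors.
    for id in $(docker ps -a --filter=name=k8s_kube-controller-manager --format '{{.ID}}'); do
        docker logs --tail 400 "$id" || true
    done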
	I1025 22:00:29.199869  112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
	I1025 22:00:29.199880  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
	I1025 22:00:29.285044  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:00:29.285078  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1025 22:00:29.285137  112102 out.go:239] X Problems detected in kubelet:
	W1025 22:00:29.285147  112102 out.go:239]   Oct 25 22:00:05 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:05.313295    7903 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:00:29.285155  112102 out.go:239]   Oct 25 22:00:11 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:11.353514    7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:29.285161  112102 out.go:239]   Oct 25 22:00:12 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:12.560217    7903 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:00:29.285166  112102 out.go:239]   Oct 25 22:00:18 stopped-upgrade-634233 kubelet[7903]: E1025 22:00:18.187708    7903 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 22:00:29.285172  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:00:29.285178  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
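From here the report repeats the same roughly ten-second diagnostic cycle until the start timeout expires: probe healthz, enumerate containers, gather component and kubelet logs, print the kubelet problem summary. In outline (a behavioural sketch, not minikube's actual Go implementation):

    # Poll until the apiserver answers; on each failure, dump the
    # evidence gathered in the cycles below, then wait and retry.
    while ! curl -fsk --max-time 5 https://192.168.50.236:8443/healthz; do
        docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}'
        sudo journalctl -u kubelet -n 400 | tail -n 5
        sleep 10
    done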
	I1025 22:00:39.286616  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 22:00:39.287364  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 22:00:39.287477  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 22:00:39.321781  112102 logs.go:284] 1 containers: [4020488488c9]
	I1025 22:00:39.321867  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 22:00:39.352191  112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
	I1025 22:00:39.352296  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 22:00:39.382433  112102 logs.go:284] 0 containers: []
	W1025 22:00:39.382465  112102 logs.go:286] No container was found matching "coredns"
	I1025 22:00:39.382525  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 22:00:39.411537  112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
	I1025 22:00:39.411626  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 22:00:39.439791  112102 logs.go:284] 0 containers: []
	W1025 22:00:39.439815  112102 logs.go:286] No container was found matching "kube-proxy"
	I1025 22:00:39.439879  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 22:00:39.473548  112102 logs.go:284] 3 containers: [e1d2be52be40 16645aa4516e 53138481ecbd]
	I1025 22:00:39.473640  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 22:00:39.513005  112102 logs.go:284] 0 containers: []
	W1025 22:00:39.513037  112102 logs.go:286] No container was found matching "kindnet"
	I1025 22:00:39.513054  112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
	I1025 22:00:39.513068  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
	I1025 22:00:39.551327  112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
	I1025 22:00:39.551357  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
	I1025 22:00:39.594523  112102 logs.go:123] Gathering logs for kube-controller-manager [16645aa4516e] ...
	I1025 22:00:39.594562  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16645aa4516e"
	I1025 22:00:39.631307  112102 logs.go:123] Gathering logs for Docker ...
	I1025 22:00:39.631338  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 22:00:39.664002  112102 logs.go:123] Gathering logs for kubelet ...
	I1025 22:00:39.664033  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1025 22:00:39.705088  112102 logs.go:138] Found kubelet problem: Oct 25 22:00:30 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:30.571064    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:39.706759  112102 logs.go:138] Found kubelet problem: Oct 25 22:00:31 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:31.572055    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:39.708596  112102 logs.go:138] Found kubelet problem: Oct 25 22:00:32 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:32.570796    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
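Note the kubelet PID in these entries has changed from 7903 to 9704: the kubelet itself was restarted between cycles, which also resets each container's crash-loop back-off (the apiserver is back to "back-off 10s"). Whether systemd did the restarting can be checked with (NRestarts requires a reasonably recent systemd in the guest; an assumption here):

    # NRestarts counts service restarts since boot (systemd >= 235).
    systemctl show kubelet -p NRestarts,ExecMainStartTimestamp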
	I1025 22:00:39.719904  112102 logs.go:123] Gathering logs for kube-apiserver [4020488488c9] ...
	I1025 22:00:39.719929  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4020488488c9"
	I1025 22:00:39.775543  112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
	I1025 22:00:39.775575  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
	I1025 22:00:39.814971  112102 logs.go:123] Gathering logs for container status ...
	I1025 22:00:39.815000  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:00:39.836093  112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
	I1025 22:00:39.836121  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
	I1025 22:00:39.933410  112102 logs.go:123] Gathering logs for kube-controller-manager [e1d2be52be40] ...
	I1025 22:00:39.933450  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1d2be52be40"
	I1025 22:00:39.971070  112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
	I1025 22:00:39.971099  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
	I1025 22:00:40.012071  112102 logs.go:123] Gathering logs for dmesg ...
	I1025 22:00:40.012108  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:00:40.021130  112102 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:00:40.021154  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:00:40.084858  112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	error: tls: private key does not match public key
	 output: 
	** stderr ** 
	error: tls: private key does not match public key
	
	** /stderr **
	I1025 22:00:40.084893  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:00:40.084907  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1025 22:00:40.084959  112102 out.go:239] X Problems detected in kubelet:
	W1025 22:00:40.084971  112102 out.go:239]   Oct 25 22:00:30 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:30.571064    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:40.084978  112102 out.go:239]   Oct 25 22:00:31 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:31.572055    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:40.084984  112102 out.go:239]   Oct 25 22:00:32 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:32.570796    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 22:00:40.084989  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:00:40.084994  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 22:00:50.086071  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 22:00:50.086710  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 22:00:50.086792  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 22:00:50.119920  112102 logs.go:284] 1 containers: [064ea6f86a9c]
	I1025 22:00:50.120014  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 22:00:50.150402  112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
	I1025 22:00:50.150490  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 22:00:50.181431  112102 logs.go:284] 0 containers: []
	W1025 22:00:50.181468  112102 logs.go:286] No container was found matching "coredns"
	I1025 22:00:50.181529  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 22:00:50.212237  112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
	I1025 22:00:50.212354  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 22:00:50.242512  112102 logs.go:284] 0 containers: []
	W1025 22:00:50.242540  112102 logs.go:286] No container was found matching "kube-proxy"
	I1025 22:00:50.242605  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 22:00:50.275176  112102 logs.go:284] 4 containers: [1dd89316adc1 e1d2be52be40 16645aa4516e 53138481ecbd]
	I1025 22:00:50.275275  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 22:00:50.306176  112102 logs.go:284] 0 containers: []
	W1025 22:00:50.306206  112102 logs.go:286] No container was found matching "kindnet"
	I1025 22:00:50.306221  112102 logs.go:123] Gathering logs for kube-controller-manager [16645aa4516e] ...
	I1025 22:00:50.306234  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16645aa4516e"
	I1025 22:00:50.341164  112102 logs.go:123] Gathering logs for container status ...
	I1025 22:00:50.341202  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:00:50.359484  112102 logs.go:123] Gathering logs for dmesg ...
	I1025 22:00:50.359509  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:00:50.369107  112102 logs.go:123] Gathering logs for kube-apiserver [064ea6f86a9c] ...
	I1025 22:00:50.369133  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064ea6f86a9c"
	I1025 22:00:50.431383  112102 logs.go:123] Gathering logs for kube-controller-manager [e1d2be52be40] ...
	I1025 22:00:50.431434  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1d2be52be40"
	I1025 22:00:50.466028  112102 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:00:50.466059  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:00:50.546745  112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	error: tls: private key does not match public key
	 output: 
	** stderr ** 
	error: tls: private key does not match public key
	
	** /stderr **
	I1025 22:00:50.546767  112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
	I1025 22:00:50.546780  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
	I1025 22:00:50.593269  112102 logs.go:123] Gathering logs for kube-controller-manager [1dd89316adc1] ...
	I1025 22:00:50.593302  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dd89316adc1"
	I1025 22:00:50.631964  112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
	I1025 22:00:50.631992  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
	I1025 22:00:50.743868  112102 logs.go:123] Gathering logs for Docker ...
	I1025 22:00:50.743909  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 22:00:50.780512  112102 logs.go:123] Gathering logs for kubelet ...
	I1025 22:00:50.780544  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1025 22:00:50.804502  112102 logs.go:138] Found kubelet problem: Oct 25 22:00:30 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:30.571064    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:50.806204  112102 logs.go:138] Found kubelet problem: Oct 25 22:00:31 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:31.572055    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:50.808112  112102 logs.go:138] Found kubelet problem: Oct 25 22:00:32 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:32.570796    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:50.831673  112102 logs.go:138] Found kubelet problem: Oct 25 22:00:47 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:47.751362    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:50.833912  112102 logs.go:138] Found kubelet problem: Oct 25 22:00:48 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:48.773179    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:50.837162  112102 logs.go:138] Found kubelet problem: Oct 25 22:00:50 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:50.655030    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 22:00:50.837482  112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
	I1025 22:00:50.837504  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
	I1025 22:00:50.875462  112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
	I1025 22:00:50.875491  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
	I1025 22:00:50.916189  112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
	I1025 22:00:50.916241  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
	I1025 22:00:50.963419  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:00:50.963445  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1025 22:00:50.963497  112102 out.go:239] X Problems detected in kubelet:
	W1025 22:00:50.963512  112102 out.go:239]   Oct 25 22:00:31 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:31.572055    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:50.963530  112102 out.go:239]   Oct 25 22:00:32 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:32.570796    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:50.963541  112102 out.go:239]   Oct 25 22:00:47 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:47.751362    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:50.963553  112102 out.go:239]   Oct 25 22:00:48 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:48.773179    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:00:50.963571  112102 out.go:239]   Oct 25 22:00:50 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:50.655030    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 22:00:50.963583  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:00:50.963596  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 22:01:00.965228  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 22:01:00.965924  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 22:01:00.966013  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 22:01:00.999637  112102 logs.go:284] 1 containers: [064ea6f86a9c]
	I1025 22:01:00.999729  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 22:01:01.032956  112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
	I1025 22:01:01.033045  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 22:01:01.063708  112102 logs.go:284] 0 containers: []
	W1025 22:01:01.063735  112102 logs.go:286] No container was found matching "coredns"
	I1025 22:01:01.063793  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 22:01:01.096877  112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
	I1025 22:01:01.096958  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 22:01:01.127378  112102 logs.go:284] 0 containers: []
	W1025 22:01:01.127407  112102 logs.go:286] No container was found matching "kube-proxy"
	I1025 22:01:01.127469  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 22:01:01.159108  112102 logs.go:284] 3 containers: [1dd89316adc1 16645aa4516e 53138481ecbd]
	I1025 22:01:01.159202  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 22:01:01.190113  112102 logs.go:284] 0 containers: []
	W1025 22:01:01.190137  112102 logs.go:286] No container was found matching "kindnet"
	I1025 22:01:01.190157  112102 logs.go:123] Gathering logs for container status ...
	I1025 22:01:01.190169  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:01:01.207588  112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
	I1025 22:01:01.207619  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
	I1025 22:01:01.306531  112102 logs.go:123] Gathering logs for kube-controller-manager [16645aa4516e] ...
	I1025 22:01:01.306571  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16645aa4516e"
	I1025 22:01:01.343259  112102 logs.go:123] Gathering logs for Docker ...
	I1025 22:01:01.343292  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 22:01:01.375391  112102 logs.go:123] Gathering logs for kube-controller-manager [1dd89316adc1] ...
	I1025 22:01:01.375423  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dd89316adc1"
	I1025 22:01:01.410399  112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
	I1025 22:01:01.410427  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
	I1025 22:01:01.450148  112102 logs.go:123] Gathering logs for kubelet ...
	I1025 22:01:01.450179  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1025 22:01:01.481256  112102 logs.go:138] Found kubelet problem: Oct 25 22:00:47 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:47.751362    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:01.483506  112102 logs.go:138] Found kubelet problem: Oct 25 22:00:48 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:48.773179    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:01.486382  112102 logs.go:138] Found kubelet problem: Oct 25 22:00:50 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:50.655030    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:01.489918  112102 logs.go:138] Found kubelet problem: Oct 25 22:00:52 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:52.826034    9704 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:01:01.499230  112102 logs.go:138] Found kubelet problem: Oct 25 22:00:58 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:58.651724    9704 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	I1025 22:01:01.503823  112102 logs.go:123] Gathering logs for kube-apiserver [064ea6f86a9c] ...
	I1025 22:01:01.503841  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 064ea6f86a9c"
	I1025 22:01:01.564756  112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
	I1025 22:01:01.564790  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
	I1025 22:01:01.605724  112102 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:01:01.605756  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:01:01.675136  112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	error: tls: private key does not match public key
	 output: 
	** stderr ** 
	error: tls: private key does not match public key
	
	** /stderr **
	I1025 22:01:01.675157  112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
	I1025 22:01:01.675168  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
	I1025 22:01:01.713573  112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
	I1025 22:01:01.713612  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
	I1025 22:01:01.755029  112102 logs.go:123] Gathering logs for dmesg ...
	I1025 22:01:01.755064  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:01:01.765173  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:01:01.765195  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1025 22:01:01.765248  112102 out.go:239] X Problems detected in kubelet:
	W1025 22:01:01.765260  112102 out.go:239]   Oct 25 22:00:47 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:47.751362    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:01.765268  112102 out.go:239]   Oct 25 22:00:48 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:48.773179    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:01.765277  112102 out.go:239]   Oct 25 22:00:50 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:50.655030    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:01.765283  112102 out.go:239]   Oct 25 22:00:52 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:52.826034    9704 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:01:01.765293  112102 out.go:239]   Oct 25 22:00:58 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:58.651724    9704 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	I1025 22:01:01.765302  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:01:01.765309  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 22:01:11.766605  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 22:01:11.767261  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 22:01:11.767355  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 22:01:11.806406  112102 logs.go:284] 1 containers: [cdbdd0260197]
	I1025 22:01:11.806508  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 22:01:11.839324  112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
	I1025 22:01:11.839395  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 22:01:11.869758  112102 logs.go:284] 0 containers: []
	W1025 22:01:11.869780  112102 logs.go:286] No container was found matching "coredns"
	I1025 22:01:11.869834  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 22:01:11.905099  112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
	I1025 22:01:11.905198  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 22:01:11.937409  112102 logs.go:284] 0 containers: []
	W1025 22:01:11.937432  112102 logs.go:286] No container was found matching "kube-proxy"
	I1025 22:01:11.937490  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 22:01:11.975032  112102 logs.go:284] 3 containers: [8573e3b0daef 1dd89316adc1 53138481ecbd]
	I1025 22:01:11.975131  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 22:01:12.016243  112102 logs.go:284] 0 containers: []
	W1025 22:01:12.016266  112102 logs.go:286] No container was found matching "kindnet"
	I1025 22:01:12.016282  112102 logs.go:123] Gathering logs for kubelet ...
	I1025 22:01:12.016295  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1025 22:01:12.040154  112102 logs.go:138] Found kubelet problem: Oct 25 22:00:52 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:52.826034    9704 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:01:12.050137  112102 logs.go:138] Found kubelet problem: Oct 25 22:00:58 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:58.651724    9704 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:01:12.058461  112102 logs.go:138] Found kubelet problem: Oct 25 22:01:03 stopped-upgrade-634233 kubelet[9704]: E1025 22:01:03.329959    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:12.078581  112102 logs.go:138] Found kubelet problem: Oct 25 22:01:09 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:09.670566   11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 22:01:12.082448  112102 logs.go:123] Gathering logs for dmesg ...
	I1025 22:01:12.082479  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:01:12.092465  112102 logs.go:123] Gathering logs for Docker ...
	I1025 22:01:12.092502  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 22:01:12.130925  112102 logs.go:123] Gathering logs for kube-controller-manager [1dd89316adc1] ...
	I1025 22:01:12.130956  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1dd89316adc1"
	I1025 22:01:12.173734  112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
	I1025 22:01:12.173772  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
	I1025 22:01:12.227355  112102 logs.go:123] Gathering logs for container status ...
	I1025 22:01:12.227397  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:01:12.253553  112102 logs.go:123] Gathering logs for kube-apiserver [cdbdd0260197] ...
	I1025 22:01:12.253591  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbdd0260197"
	I1025 22:01:12.314333  112102 logs.go:123] Gathering logs for kube-controller-manager [8573e3b0daef] ...
	I1025 22:01:12.314370  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8573e3b0daef"
	I1025 22:01:12.351226  112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
	I1025 22:01:12.351260  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
	I1025 22:01:12.401942  112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
	I1025 22:01:12.401999  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
	I1025 22:01:12.504610  112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
	I1025 22:01:12.504651  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
	I1025 22:01:12.545952  112102 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:01:12.545984  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:01:12.612533  112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	error: tls: private key does not match public key
	 output: 
	** stderr ** 
	error: tls: private key does not match public key
	
	** /stderr **
	I1025 22:01:12.612562  112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
	I1025 22:01:12.612576  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
	I1025 22:01:12.654706  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:01:12.654741  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1025 22:01:12.654812  112102 out.go:239] X Problems detected in kubelet:
	W1025 22:01:12.654838  112102 out.go:239]   Oct 25 22:00:52 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:52.826034    9704 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:01:12.654852  112102 out.go:239]   Oct 25 22:00:58 stopped-upgrade-634233 kubelet[9704]: E1025 22:00:58.651724    9704 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:01:12.654864  112102 out.go:239]   Oct 25 22:01:03 stopped-upgrade-634233 kubelet[9704]: E1025 22:01:03.329959    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:12.654876  112102 out.go:239]   Oct 25 22:01:09 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:09.670566   11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 22:01:12.654890  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:01:12.654898  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
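	The kubelet problems collected above show kube-apiserver and kube-controller-manager stuck in CrashLoopBackOff; the kubelet doubles the restart delay after each failed start (10s, 20s, 40s, ...) up to a five-minute cap, which is why the back-off values in these messages keep growing. One way to pull the last exit status of a crashing container straight from Docker is sketched below, assuming shell access inside the guest; <container-id> is a placeholder for an ID from the docker ps -a listings in this log (e.g. cdbdd0260197):
	    docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' <container-id>
	    docker logs --tail 50 <container-id>   # last output before the crash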
	I1025 22:01:22.655428  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 22:01:22.656090  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
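	Every retry cycle in this loop starts with the healthz probe above; "connection refused" means nothing is listening on 192.168.50.236:8443 at all (the apiserver container is down again), rather than a running-but-unhealthy apiserver. A hand-run equivalent, assuming network reachability from the host; -k skips verification of the cluster's self-signed certificate:
	    curl -k https://192.168.50.236:8443/healthz
	    # A healthy apiserver replies "ok"; here the TCP connect itself is refused.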
	I1025 22:01:22.656172  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 22:01:22.696713  112102 logs.go:284] 1 containers: [cdbdd0260197]
	I1025 22:01:22.696810  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 22:01:22.738056  112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
	I1025 22:01:22.738139  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 22:01:22.769052  112102 logs.go:284] 0 containers: []
	W1025 22:01:22.769075  112102 logs.go:286] No container was found matching "coredns"
	I1025 22:01:22.769130  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 22:01:22.803051  112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
	I1025 22:01:22.803125  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 22:01:22.832579  112102 logs.go:284] 0 containers: []
	W1025 22:01:22.832602  112102 logs.go:286] No container was found matching "kube-proxy"
	I1025 22:01:22.832651  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 22:01:22.875795  112102 logs.go:284] 2 containers: [8573e3b0daef 53138481ecbd]
	I1025 22:01:22.875897  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 22:01:22.923740  112102 logs.go:284] 0 containers: []
	W1025 22:01:22.923769  112102 logs.go:286] No container was found matching "kindnet"
	I1025 22:01:22.923785  112102 logs.go:123] Gathering logs for kubelet ...
	I1025 22:01:22.923830  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1025 22:01:22.942540  112102 logs.go:138] Found kubelet problem: Oct 25 22:01:03 stopped-upgrade-634233 kubelet[9704]: E1025 22:01:03.329959    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:22.961215  112102 logs.go:138] Found kubelet problem: Oct 25 22:01:09 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:09.670566   11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:22.971806  112102 logs.go:138] Found kubelet problem: Oct 25 22:01:16 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:16.588843   11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:22.976940  112102 logs.go:138] Found kubelet problem: Oct 25 22:01:19 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:19.713197   11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:01:22.981905  112102 logs.go:138] Found kubelet problem: Oct 25 22:01:22 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:22.906018   11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	I1025 22:01:22.981938  112102 logs.go:123] Gathering logs for kube-apiserver [cdbdd0260197] ...
	I1025 22:01:22.981966  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdbdd0260197"
	I1025 22:01:23.039536  112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
	I1025 22:01:23.039572  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
	I1025 22:01:23.076076  112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
	I1025 22:01:23.076109  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
	I1025 22:01:23.168032  112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
	I1025 22:01:23.168080  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
	I1025 22:01:23.218392  112102 logs.go:123] Gathering logs for dmesg ...
	I1025 22:01:23.218432  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:01:23.229071  112102 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:01:23.229099  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:01:23.307057  112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	error: tls: private key does not match public key
	 output: 
	** stderr ** 
	error: tls: private key does not match public key
	
	** /stderr **
	I1025 22:01:23.307078  112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
	I1025 22:01:23.307089  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
	I1025 22:01:23.344132  112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
	I1025 22:01:23.344164  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
	I1025 22:01:23.390409  112102 logs.go:123] Gathering logs for kube-controller-manager [8573e3b0daef] ...
	I1025 22:01:23.390443  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8573e3b0daef"
	I1025 22:01:23.425784  112102 logs.go:123] Gathering logs for Docker ...
	I1025 22:01:23.425815  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 22:01:23.456977  112102 logs.go:123] Gathering logs for container status ...
	I1025 22:01:23.457016  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:01:23.478633  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:01:23.478666  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1025 22:01:23.478728  112102 out.go:239] X Problems detected in kubelet:
	W1025 22:01:23.478750  112102 out.go:239]   Oct 25 22:01:03 stopped-upgrade-634233 kubelet[9704]: E1025 22:01:03.329959    9704 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:23.478764  112102 out.go:239]   Oct 25 22:01:09 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:09.670566   11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:23.478784  112102 out.go:239]   Oct 25 22:01:16 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:16.588843   11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:23.478799  112102 out.go:239]   Oct 25 22:01:19 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:19.713197   11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:01:23.478809  112102 out.go:239]   Oct 25 22:01:22 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:22.906018   11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	I1025 22:01:23.478822  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:01:23.478847  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 22:01:33.479544  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 22:01:33.480143  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 22:01:33.480266  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 22:01:33.515976  112102 logs.go:284] 1 containers: [044bfb6e9ec8]
	I1025 22:01:33.516063  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 22:01:33.553478  112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
	I1025 22:01:33.553546  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 22:01:33.588972  112102 logs.go:284] 0 containers: []
	W1025 22:01:33.589000  112102 logs.go:286] No container was found matching "coredns"
	I1025 22:01:33.589061  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 22:01:33.620791  112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
	I1025 22:01:33.620885  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 22:01:33.651720  112102 logs.go:284] 0 containers: []
	W1025 22:01:33.651746  112102 logs.go:286] No container was found matching "kube-proxy"
	I1025 22:01:33.651806  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 22:01:33.681915  112102 logs.go:284] 2 containers: [8573e3b0daef 53138481ecbd]
	I1025 22:01:33.682004  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 22:01:33.711270  112102 logs.go:284] 0 containers: []
	W1025 22:01:33.711294  112102 logs.go:286] No container was found matching "kindnet"
	I1025 22:01:33.711315  112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
	I1025 22:01:33.711330  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
	I1025 22:01:33.756532  112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
	I1025 22:01:33.756572  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
	I1025 22:01:33.794191  112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
	I1025 22:01:33.794223  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
	I1025 22:01:33.883530  112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
	I1025 22:01:33.883565  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
	I1025 22:01:33.924964  112102 logs.go:123] Gathering logs for container status ...
	I1025 22:01:33.924995  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:01:33.948257  112102 logs.go:123] Gathering logs for kubelet ...
	I1025 22:01:33.948297  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1025 22:01:33.970345  112102 logs.go:138] Found kubelet problem: Oct 25 22:01:09 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:09.670566   11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:33.981122  112102 logs.go:138] Found kubelet problem: Oct 25 22:01:16 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:16.588843   11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:33.986481  112102 logs.go:138] Found kubelet problem: Oct 25 22:01:19 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:19.713197   11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:01:33.991378  112102 logs.go:138] Found kubelet problem: Oct 25 22:01:22 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:22.906018   11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:01:34.002630  112102 logs.go:138] Found kubelet problem: Oct 25 22:01:29 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:29.795238   11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 22:01:34.009073  112102 logs.go:123] Gathering logs for dmesg ...
	I1025 22:01:34.009095  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:01:34.019581  112102 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:01:34.019606  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:01:34.089518  112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	error: tls: private key does not match public key
	 output: 
	** stderr ** 
	error: tls: private key does not match public key
	
	** /stderr **
	I1025 22:01:34.089589  112102 logs.go:123] Gathering logs for Docker ...
	I1025 22:01:34.089621  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 22:01:34.126587  112102 logs.go:123] Gathering logs for kube-apiserver [044bfb6e9ec8] ...
	I1025 22:01:34.126619  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 044bfb6e9ec8"
	I1025 22:01:34.191804  112102 logs.go:123] Gathering logs for kube-controller-manager [8573e3b0daef] ...
	I1025 22:01:34.191837  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8573e3b0daef"
	I1025 22:01:34.230849  112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
	I1025 22:01:34.230879  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
	I1025 22:01:34.275305  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:01:34.275334  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1025 22:01:34.275388  112102 out.go:239] X Problems detected in kubelet:
	W1025 22:01:34.275399  112102 out.go:239]   Oct 25 22:01:09 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:09.670566   11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:34.275406  112102 out.go:239]   Oct 25 22:01:16 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:16.588843   11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:34.275420  112102 out.go:239]   Oct 25 22:01:19 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:19.713197   11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:01:34.275434  112102 out.go:239]   Oct 25 22:01:22 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:22.906018   11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:01:34.275446  112102 out.go:239]   Oct 25 22:01:29 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:29.795238   11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 22:01:34.275456  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:01:34.275463  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 22:01:44.277061  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 22:01:44.277743  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 22:01:44.277849  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 22:01:44.318353  112102 logs.go:284] 1 containers: [044bfb6e9ec8]
	I1025 22:01:44.318452  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 22:01:44.357199  112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
	I1025 22:01:44.357290  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 22:01:44.397263  112102 logs.go:284] 0 containers: []
	W1025 22:01:44.397291  112102 logs.go:286] No container was found matching "coredns"
	I1025 22:01:44.397355  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 22:01:44.445538  112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
	I1025 22:01:44.445626  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 22:01:44.482527  112102 logs.go:284] 0 containers: []
	W1025 22:01:44.482560  112102 logs.go:286] No container was found matching "kube-proxy"
	I1025 22:01:44.482619  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 22:01:44.519526  112102 logs.go:284] 3 containers: [56aa01cc7db9 8573e3b0daef 53138481ecbd]
	I1025 22:01:44.519629  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 22:01:44.558628  112102 logs.go:284] 0 containers: []
	W1025 22:01:44.558660  112102 logs.go:286] No container was found matching "kindnet"
	I1025 22:01:44.558684  112102 logs.go:123] Gathering logs for kubelet ...
	I1025 22:01:44.558703  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1025 22:01:44.580361  112102 logs.go:138] Found kubelet problem: Oct 25 22:01:19 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:19.713197   11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:01:44.585406  112102 logs.go:138] Found kubelet problem: Oct 25 22:01:22 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:22.906018   11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:01:44.597052  112102 logs.go:138] Found kubelet problem: Oct 25 22:01:29 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:29.795238   11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:44.608057  112102 logs.go:138] Found kubelet problem: Oct 25 22:01:36 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:36.595537   11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 22:01:44.620251  112102 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:01:44.620282  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:01:44.703606  112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	error: tls: private key does not match public key
	 output: 
	** stderr ** 
	error: tls: private key does not match public key
	
	** /stderr **
	I1025 22:01:44.703633  112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
	I1025 22:01:44.703648  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
	I1025 22:01:44.750857  112102 logs.go:123] Gathering logs for kube-controller-manager [56aa01cc7db9] ...
	I1025 22:01:44.750903  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aa01cc7db9"
	I1025 22:01:44.791420  112102 logs.go:123] Gathering logs for container status ...
	I1025 22:01:44.791462  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:01:44.815681  112102 logs.go:123] Gathering logs for kube-apiserver [044bfb6e9ec8] ...
	I1025 22:01:44.815718  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 044bfb6e9ec8"
	I1025 22:01:44.875475  112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
	I1025 22:01:44.875521  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
	I1025 22:01:44.990427  112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
	I1025 22:01:44.990563  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
	I1025 22:01:45.037024  112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
	I1025 22:01:45.037060  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
	I1025 22:01:45.079783  112102 logs.go:123] Gathering logs for Docker ...
	I1025 22:01:45.079819  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 22:01:45.125339  112102 logs.go:123] Gathering logs for dmesg ...
	I1025 22:01:45.125385  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:01:45.136253  112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
	I1025 22:01:45.136288  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
	I1025 22:01:45.183501  112102 logs.go:123] Gathering logs for kube-controller-manager [8573e3b0daef] ...
	I1025 22:01:45.183534  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8573e3b0daef"
	I1025 22:01:45.225415  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:01:45.225452  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1025 22:01:45.225524  112102 out.go:239] X Problems detected in kubelet:
	W1025 22:01:45.225540  112102 out.go:239]   Oct 25 22:01:19 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:19.713197   11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:01:45.225555  112102 out.go:239]   Oct 25 22:01:22 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:22.906018   11558 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:01:45.225568  112102 out.go:239]   Oct 25 22:01:29 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:29.795238   11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:45.225580  112102 out.go:239]   Oct 25 22:01:36 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:36.595537   11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 22:01:45.225592  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:01:45.225604  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 22:01:55.226039  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 22:01:55.226699  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 22:01:55.226782  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 22:01:55.265097  112102 logs.go:284] 1 containers: [bb666bf92cd4]
	I1025 22:01:55.265207  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 22:01:55.305624  112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
	I1025 22:01:55.305766  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 22:01:55.345165  112102 logs.go:284] 0 containers: []
	W1025 22:01:55.345194  112102 logs.go:286] No container was found matching "coredns"
	I1025 22:01:55.345246  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 22:01:55.376667  112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
	I1025 22:01:55.376753  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 22:01:55.413204  112102 logs.go:284] 0 containers: []
	W1025 22:01:55.413231  112102 logs.go:286] No container was found matching "kube-proxy"
	I1025 22:01:55.413290  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 22:01:55.448549  112102 logs.go:284] 3 containers: [9b087fb968e7 56aa01cc7db9 53138481ecbd]
	I1025 22:01:55.448652  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 22:01:55.483400  112102 logs.go:284] 0 containers: []
	W1025 22:01:55.483432  112102 logs.go:286] No container was found matching "kindnet"
	I1025 22:01:55.483446  112102 logs.go:123] Gathering logs for kubelet ...
	I1025 22:01:55.483458  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1025 22:01:55.507983  112102 logs.go:138] Found kubelet problem: Oct 25 22:01:36 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:36.595537   11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:55.539182  112102 logs.go:138] Found kubelet problem: Oct 25 22:01:49 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:49.000691   13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:55.540786  112102 logs.go:138] Found kubelet problem: Oct 25 22:01:49 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:49.948569   13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:55.549420  112102 logs.go:138] Found kubelet problem: Oct 25 22:01:55 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:55.321003   13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 22:01:55.549780  112102 logs.go:123] Gathering logs for kube-apiserver [bb666bf92cd4] ...
	I1025 22:01:55.549801  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb666bf92cd4"
	I1025 22:01:55.626600  112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
	I1025 22:01:55.626636  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
	I1025 22:01:55.679910  112102 logs.go:123] Gathering logs for Docker ...
	I1025 22:01:55.679942  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 22:01:55.716288  112102 logs.go:123] Gathering logs for dmesg ...
	I1025 22:01:55.716321  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:01:55.728024  112102 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:01:55.728053  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:01:55.798910  112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	error: tls: private key does not match public key
	 output: 
	** stderr ** 
	error: tls: private key does not match public key
	
	** /stderr **
	I1025 22:01:55.798934  112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
	I1025 22:01:55.798948  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
	I1025 22:01:55.849112  112102 logs.go:123] Gathering logs for kube-controller-manager [56aa01cc7db9] ...
	I1025 22:01:55.849152  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56aa01cc7db9"
	I1025 22:01:55.897138  112102 logs.go:123] Gathering logs for container status ...
	I1025 22:01:55.897166  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:01:55.921607  112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
	I1025 22:01:55.921634  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
	I1025 22:01:55.973348  112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
	I1025 22:01:55.973381  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
	I1025 22:01:56.019485  112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
	I1025 22:01:56.019521  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
	I1025 22:01:56.136053  112102 logs.go:123] Gathering logs for kube-controller-manager [9b087fb968e7] ...
	I1025 22:01:56.136093  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b087fb968e7"
	I1025 22:01:56.179509  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:01:56.179537  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1025 22:01:56.179600  112102 out.go:239] X Problems detected in kubelet:
	W1025 22:01:56.179615  112102 out.go:239]   Oct 25 22:01:36 stopped-upgrade-634233 kubelet[11558]: E1025 22:01:36.595537   11558 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:56.179627  112102 out.go:239]   Oct 25 22:01:49 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:49.000691   13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:56.179640  112102 out.go:239]   Oct 25 22:01:49 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:49.948569   13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:01:56.179653  112102 out.go:239]   Oct 25 22:01:55 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:55.321003   13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 22:01:56.179663  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:01:56.179668  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 22:02:06.181426  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 22:02:06.182140  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 22:02:06.182236  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 22:02:06.219515  112102 logs.go:284] 1 containers: [bb666bf92cd4]
	I1025 22:02:06.219585  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 22:02:06.254804  112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
	I1025 22:02:06.254900  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 22:02:06.300067  112102 logs.go:284] 0 containers: []
	W1025 22:02:06.300098  112102 logs.go:286] No container was found matching "coredns"
	I1025 22:02:06.300163  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 22:02:06.334063  112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
	I1025 22:02:06.334142  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 22:02:06.366596  112102 logs.go:284] 0 containers: []
	W1025 22:02:06.366620  112102 logs.go:286] No container was found matching "kube-proxy"
	I1025 22:02:06.366677  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 22:02:06.405517  112102 logs.go:284] 2 containers: [9b087fb968e7 53138481ecbd]
	I1025 22:02:06.405603  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 22:02:06.440097  112102 logs.go:284] 0 containers: []
	W1025 22:02:06.440121  112102 logs.go:286] No container was found matching "kindnet"
	I1025 22:02:06.440135  112102 logs.go:123] Gathering logs for kubelet ...
	I1025 22:02:06.440148  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1025 22:02:06.469811  112102 logs.go:138] Found kubelet problem: Oct 25 22:01:49 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:49.000691   13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:02:06.471374  112102 logs.go:138] Found kubelet problem: Oct 25 22:01:49 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:49.948569   13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:02:06.479721  112102 logs.go:138] Found kubelet problem: Oct 25 22:01:55 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:55.321003   13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:02:06.487512  112102 logs.go:138] Found kubelet problem: Oct 25 22:02:00 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:00.010327   13366 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:02:06.490473  112102 logs.go:138] Found kubelet problem: Oct 25 22:02:02 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:02.089671   13366 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	I1025 22:02:06.497508  112102 logs.go:123] Gathering logs for kube-apiserver [bb666bf92cd4] ...
	I1025 22:02:06.497533  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb666bf92cd4"
	I1025 22:02:06.556489  112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
	I1025 22:02:06.556522  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
	I1025 22:02:06.612058  112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
	I1025 22:02:06.612088  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
	I1025 22:02:06.664901  112102 logs.go:123] Gathering logs for kube-controller-manager [9b087fb968e7] ...
	I1025 22:02:06.664936  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b087fb968e7"
	I1025 22:02:06.712162  112102 logs.go:123] Gathering logs for Docker ...
	I1025 22:02:06.712198  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 22:02:06.760167  112102 logs.go:123] Gathering logs for dmesg ...
	I1025 22:02:06.760205  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:02:06.772085  112102 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:02:06.772114  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:02:06.842049  112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	error: tls: private key does not match public key
	 output: 
	** stderr ** 
	error: tls: private key does not match public key
	
	** /stderr **
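"tls: private key does not match public key" is kubectl refusing a kubeconfig whose client certificate and private key come from different key pairs; it recurs for every "describe nodes" attempt in this run. A standard openssl cross-check, with illustrative paths (the exact files in play are not shown in the log):

    CRT=/var/lib/minikube/certs/apiserver.crt   # illustrative path, an assumption
    KEY=/var/lib/minikube/certs/apiserver.key   # illustrative path, an assumption
    # If the two digests differ, the cert/key pair is mismatched,
    # which is exactly the condition kubectl reports above.
    openssl x509 -in "$CRT" -noout -pubkey | openssl sha256
    openssl pkey -in "$KEY" -pubout 2>/dev/null | openssl sha256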
	I1025 22:02:06.842077  112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
	I1025 22:02:06.842092  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
	I1025 22:02:06.881853  112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
	I1025 22:02:06.881884  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
	I1025 22:02:06.925916  112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
	I1025 22:02:06.925956  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
	I1025 22:02:07.036722  112102 logs.go:123] Gathering logs for container status ...
	I1025 22:02:07.036761  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
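The container-status command is a fallback chain: `which crictl || echo crictl` substitutes either the crictl path or the bare word "crictl"; if crictl is absent the first sudo command fails, and the outer `||` falls through to plain docker. Unrolled, the same logic reads (a sketch, not the code minikube runs):

    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a     # CRI-aware runtimes
    else
      sudo docker ps -a     # dockershim-era fallback, as on this VM
    fi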
	I1025 22:02:07.065295  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:02:07.065322  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1025 22:02:07.065378  112102 out.go:239] X Problems detected in kubelet:
	W1025 22:02:07.065396  112102 out.go:239]   Oct 25 22:01:49 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:49.000691   13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:02:07.065410  112102 out.go:239]   Oct 25 22:01:49 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:49.948569   13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:02:07.065421  112102 out.go:239]   Oct 25 22:01:55 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:55.321003   13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:02:07.065433  112102 out.go:239]   Oct 25 22:02:00 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:00.010327   13366 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:02:07.065443  112102 out.go:239]   Oct 25 22:02:02 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:02.089671   13366 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	I1025 22:02:07.065457  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:02:07.065464  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 22:02:17.066237  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 22:02:17.066866  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
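This healthz probe repeats on a roughly 10s timer for the rest of the restartCluster window (22:02:17, 22:02:27, 22:02:40 below). "connection refused" means nothing is listening on 8443 at all, i.e. the apiserver container is down, as opposed to an unhealthy apiserver answering with a non-200 status. The same probe by hand (a sketch; -k skips verification of the cluster's self-signed serving cert):

    # Prints 000 while the port is closed, 200 once the apiserver reports healthy.
    curl -sk -o /dev/null -w '%{http_code}\n' https://192.168.50.236:8443/healthz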
	I1025 22:02:17.066969  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 22:02:17.103308  112102 logs.go:284] 1 containers: [d5226f967430]
	I1025 22:02:17.103379  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 22:02:17.143531  112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
	I1025 22:02:17.143611  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 22:02:17.176121  112102 logs.go:284] 0 containers: []
	W1025 22:02:17.176151  112102 logs.go:286] No container was found matching "coredns"
	I1025 22:02:17.176210  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 22:02:17.208049  112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
	I1025 22:02:17.208120  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 22:02:17.241165  112102 logs.go:284] 0 containers: []
	W1025 22:02:17.241188  112102 logs.go:286] No container was found matching "kube-proxy"
	I1025 22:02:17.241245  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 22:02:17.273320  112102 logs.go:284] 3 containers: [3aa1487697c9 9b087fb968e7 53138481ecbd]
	I1025 22:02:17.273412  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 22:02:17.304394  112102 logs.go:284] 0 containers: []
	W1025 22:02:17.304424  112102 logs.go:286] No container was found matching "kindnet"
	I1025 22:02:17.304438  112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
	I1025 22:02:17.304459  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
	I1025 22:02:17.346026  112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
	I1025 22:02:17.346056  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
	I1025 22:02:17.395824  112102 logs.go:123] Gathering logs for kube-controller-manager [3aa1487697c9] ...
	I1025 22:02:17.395854  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa1487697c9"
	I1025 22:02:17.433162  112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
	I1025 22:02:17.433189  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
	I1025 22:02:17.487559  112102 logs.go:123] Gathering logs for dmesg ...
	I1025 22:02:17.487588  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:02:17.496717  112102 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:02:17.496746  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:02:17.572572  112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	error: tls: private key does not match public key
	 output: 
	** stderr ** 
	error: tls: private key does not match public key
	
	** /stderr **
	I1025 22:02:17.572601  112102 logs.go:123] Gathering logs for kube-apiserver [d5226f967430] ...
	I1025 22:02:17.572619  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5226f967430"
	I1025 22:02:17.635850  112102 logs.go:123] Gathering logs for Docker ...
	I1025 22:02:17.635880  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 22:02:17.686147  112102 logs.go:123] Gathering logs for kubelet ...
	I1025 22:02:17.686191  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1025 22:02:17.711837  112102 logs.go:138] Found kubelet problem: Oct 25 22:01:55 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:55.321003   13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:02:17.720308  112102 logs.go:138] Found kubelet problem: Oct 25 22:02:00 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:00.010327   13366 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:02:17.723325  112102 logs.go:138] Found kubelet problem: Oct 25 22:02:02 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:02.089671   13366 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:02:17.740341  112102 logs.go:138] Found kubelet problem: Oct 25 22:02:11 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:11.134483   13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:02:17.748070  112102 logs.go:138] Found kubelet problem: Oct 25 22:02:15 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:15.318641   13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 22:02:17.751751  112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
	I1025 22:02:17.751778  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
	I1025 22:02:17.791355  112102 logs.go:123] Gathering logs for kube-controller-manager [9b087fb968e7] ...
	I1025 22:02:17.791387  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9b087fb968e7"
	I1025 22:02:17.832757  112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
	I1025 22:02:17.832783  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
	I1025 22:02:17.928565  112102 logs.go:123] Gathering logs for container status ...
	I1025 22:02:17.928602  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:02:17.954915  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:02:17.954951  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1025 22:02:17.955055  112102 out.go:239] X Problems detected in kubelet:
	W1025 22:02:17.955074  112102 out.go:239]   Oct 25 22:01:55 stopped-upgrade-634233 kubelet[13366]: E1025 22:01:55.321003   13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:02:17.955089  112102 out.go:239]   Oct 25 22:02:00 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:00.010327   13366 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:02:17.955100  112102 out.go:239]   Oct 25 22:02:02 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:02.089671   13366 pod_workers.go:191] Error syncing pod 603b914543a305bf066dc8de01ce2232 ("kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-stopped-upgrade-634233_kube-system(603b914543a305bf066dc8de01ce2232)"
	W1025 22:02:17.955109  112102 out.go:239]   Oct 25 22:02:11 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:11.134483   13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:02:17.955119  112102 out.go:239]   Oct 25 22:02:15 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:15.318641   13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 22:02:17.955128  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:02:17.955143  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 22:02:27.956009  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 22:02:27.956817  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 22:02:27.956906  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 22:02:28.001663  112102 logs.go:284] 2 containers: [ef3e9f6dc565 d5226f967430]
	I1025 22:02:28.001759  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 22:02:28.046954  112102 logs.go:284] 2 containers: [351d1be3fc41 111a4f5088ac]
	I1025 22:02:28.047051  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 22:02:28.086101  112102 logs.go:284] 0 containers: []
	W1025 22:02:28.086136  112102 logs.go:286] No container was found matching "coredns"
	I1025 22:02:28.086204  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 22:02:28.127303  112102 logs.go:284] 2 containers: [8464245274b1 09fabc795729]
	I1025 22:02:28.127387  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 22:02:28.160380  112102 logs.go:284] 0 containers: []
	W1025 22:02:28.160405  112102 logs.go:286] No container was found matching "kube-proxy"
	I1025 22:02:28.160474  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 22:02:28.192810  112102 logs.go:284] 3 containers: [040aa54dc9a4 3aa1487697c9 53138481ecbd]
	I1025 22:02:28.192885  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 22:02:28.228839  112102 logs.go:284] 0 containers: []
	W1025 22:02:28.228875  112102 logs.go:286] No container was found matching "kindnet"
	I1025 22:02:28.228900  112102 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:02:28.228928  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:02:28.301613  112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	error: tls: private key does not match public key
	 output: 
	** stderr ** 
	error: tls: private key does not match public key
	
	** /stderr **
	I1025 22:02:28.301641  112102 logs.go:123] Gathering logs for kube-controller-manager [040aa54dc9a4] ...
	I1025 22:02:28.301657  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 040aa54dc9a4"
	I1025 22:02:29.486716  112102 ssh_runner.go:235] Completed: /bin/bash -c "docker logs --tail 400 040aa54dc9a4": (1.185027554s)
	I1025 22:02:29.486766  112102 logs.go:123] Gathering logs for kube-apiserver [d5226f967430] ...
	I1025 22:02:29.486781  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d5226f967430"
	I1025 22:02:29.576744  112102 logs.go:123] Gathering logs for kubelet ...
	I1025 22:02:29.576793  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1025 22:02:29.615552  112102 logs.go:138] Found kubelet problem: Oct 25 22:02:11 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:11.134483   13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:02:29.627911  112102 logs.go:138] Found kubelet problem: Oct 25 22:02:15 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:15.318641   13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 22:02:29.673769  112102 logs.go:123] Gathering logs for kube-scheduler [8464245274b1] ...
	I1025 22:02:29.673809  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8464245274b1"
	I1025 22:02:29.819358  112102 logs.go:123] Gathering logs for kube-controller-manager [3aa1487697c9] ...
	I1025 22:02:29.819402  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa1487697c9"
	I1025 22:02:29.877727  112102 logs.go:123] Gathering logs for kube-controller-manager [53138481ecbd] ...
	I1025 22:02:29.877768  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 53138481ecbd"
	I1025 22:02:29.946841  112102 logs.go:123] Gathering logs for Docker ...
	I1025 22:02:29.946881  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 22:02:29.991082  112102 logs.go:123] Gathering logs for container status ...
	I1025 22:02:29.991118  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:02:30.020196  112102 logs.go:123] Gathering logs for dmesg ...
	I1025 22:02:30.020253  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:02:30.034535  112102 logs.go:123] Gathering logs for kube-apiserver [ef3e9f6dc565] ...
	I1025 22:02:30.034574  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef3e9f6dc565"
	I1025 22:02:30.132820  112102 logs.go:123] Gathering logs for etcd [351d1be3fc41] ...
	I1025 22:02:30.132864  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 351d1be3fc41"
	I1025 22:02:30.199388  112102 logs.go:123] Gathering logs for etcd [111a4f5088ac] ...
	I1025 22:02:30.199427  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 111a4f5088ac"
	I1025 22:02:30.264634  112102 logs.go:123] Gathering logs for kube-scheduler [09fabc795729] ...
	I1025 22:02:30.264673  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 09fabc795729"
	I1025 22:02:30.333043  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:02:30.333080  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1025 22:02:30.333157  112102 out.go:239] X Problems detected in kubelet:
	W1025 22:02:30.333171  112102 out.go:239]   Oct 25 22:02:11 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:11.134483   13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:02:30.333182  112102 out.go:239]   Oct 25 22:02:15 stopped-upgrade-634233 kubelet[13366]: E1025 22:02:15.318641   13366 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 22:02:30.333194  112102 out.go:309] Setting ErrFile to fd 2...
	I1025 22:02:30.333202  112102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 22:02:40.334583  112102 api_server.go:253] Checking apiserver healthz at https://192.168.50.236:8443/healthz ...
	I1025 22:02:40.335196  112102 api_server.go:269] stopped: https://192.168.50.236:8443/healthz: Get "https://192.168.50.236:8443/healthz": dial tcp 192.168.50.236:8443: connect: connection refused
	I1025 22:02:40.335280  112102 kubeadm.go:640] restartCluster took 4m48.89592098s
	W1025 22:02:40.335349  112102 out.go:239] ! Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: context deadline exceeded
	I1025 22:02:40.335381  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1025 22:02:42.595221  112102 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (2.259817104s)
	I1025 22:02:42.595290  112102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 22:02:42.604693  112102 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 22:02:42.610588  112102 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 22:02:42.617496  112102 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
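This "config check failed" is benign here: kubeadm reset has just removed the /etc/kubernetes kubeconfigs, so ls exits with status 2 and minikube skips the stale-config cleanup rather than treating the miss as an error. The same existence check, sketched:

    # Exit status 2 (files missing) is the expected state right after a reset.
    sudo ls -la /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf \
      || echo "no stale kubeconfigs to clean up (ls exit $?)"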
	I1025 22:02:42.617544  112102 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1025 22:02:42.675319  112102 kubeadm.go:322] [init] Using Kubernetes version: v1.17.0
	I1025 22:02:42.675616  112102 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 22:02:42.938560  112102 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 22:02:42.938740  112102 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 22:02:42.938878  112102 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 22:02:43.279301  112102 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 22:02:43.279475  112102 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 22:02:43.279531  112102 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1025 22:02:43.372141  112102 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 22:02:43.374166  112102 out.go:204]   - Generating certificates and keys ...
	I1025 22:02:43.374288  112102 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 22:02:43.374378  112102 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 22:02:43.374502  112102 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 22:02:43.374639  112102 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1025 22:02:43.374741  112102 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 22:02:43.374827  112102 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1025 22:02:43.374919  112102 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1025 22:02:43.375010  112102 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1025 22:02:43.375423  112102 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 22:02:43.376113  112102 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 22:02:43.376290  112102 kubeadm.go:322] [certs] Using the existing "sa" key
	I1025 22:02:43.376489  112102 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 22:02:43.517496  112102 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 22:02:43.670859  112102 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 22:02:43.917905  112102 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 22:02:44.164406  112102 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 22:02:44.165797  112102 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 22:02:44.167643  112102 out.go:204]   - Booting up control plane ...
	I1025 22:02:44.167786  112102 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 22:02:44.198124  112102 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 22:02:44.203077  112102 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 22:02:44.207416  112102 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 22:02:44.227131  112102 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 22:03:24.229071  112102 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1025 22:06:44.230969  112102 kubeadm.go:322] 
	I1025 22:06:44.231100  112102 kubeadm.go:322] Unfortunately, an error has occurred:
	I1025 22:06:44.231263  112102 kubeadm.go:322] 	timed out waiting for the condition
	I1025 22:06:44.231275  112102 kubeadm.go:322] 
	I1025 22:06:44.231315  112102 kubeadm.go:322] This error is likely caused by:
	I1025 22:06:44.231381  112102 kubeadm.go:322] 	- The kubelet is not running
	I1025 22:06:44.231533  112102 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 22:06:44.231558  112102 kubeadm.go:322] 
	I1025 22:06:44.231745  112102 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 22:06:44.231816  112102 kubeadm.go:322] 	- 'systemctl status kubelet'
	I1025 22:06:44.231875  112102 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I1025 22:06:44.231884  112102 kubeadm.go:322] 
	I1025 22:06:44.232045  112102 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 22:06:44.232186  112102 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1025 22:06:44.232328  112102 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I1025 22:06:44.232404  112102 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I1025 22:06:44.232549  112102 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I1025 22:06:44.232595  112102 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I1025 22:06:44.233586  112102 kubeadm.go:322] W1025 22:02:42.671272   16516 validation.go:28] Cannot validate kube-proxy config - no validator is available
	I1025 22:06:44.233768  112102 kubeadm.go:322] W1025 22:02:42.671533   16516 validation.go:28] Cannot validate kubelet config - no validator is available
	I1025 22:06:44.234054  112102 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1025 22:06:44.234202  112102 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 22:06:44.234373  112102 kubeadm.go:322] W1025 22:02:44.194971   16516 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1025 22:06:44.234538  112102 kubeadm.go:322] W1025 22:02:44.199953   16516 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1025 22:06:44.234661  112102 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 22:06:44.234764  112102 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W1025 22:06:44.234939  112102 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W1025 22:02:42.671272   16516 validation.go:28] Cannot validate kube-proxy config - no validator is available
	W1025 22:02:42.671533   16516 validation.go:28] Cannot validate kubelet config - no validator is available
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1025 22:02:44.194971   16516 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1025 22:02:44.199953   16516 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	
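Before the retry below, kubeadm's own error text names the triage path; sequenced, the same commands read (a sketch run on the VM, not part of the test):

    systemctl status kubelet --no-pager          # is the kubelet running at all?
    journalctl -xeu kubelet                      # its recent log, with explanations
    docker ps -a | grep kube | grep -v pause     # control-plane containers and exit codes
    # then, for whichever container is failing:
    # docker logs CONTAINERID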
	I1025 22:06:44.235040  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1025 22:06:47.388987  112102 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force": (3.15391378s)
	I1025 22:06:47.389066  112102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 22:06:47.404472  112102 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 22:06:47.423257  112102 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 22:06:47.423313  112102 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1025 22:06:47.495864  112102 kubeadm.go:322] [init] Using Kubernetes version: v1.17.0
	I1025 22:06:47.496100  112102 kubeadm.go:322] [preflight] Running pre-flight checks
	I1025 22:06:47.845007  112102 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 22:06:47.845151  112102 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 22:06:47.845313  112102 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 22:06:48.234336  112102 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 22:06:48.234462  112102 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 22:06:48.234512  112102 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1025 22:06:48.371700  112102 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 22:06:48.374879  112102 out.go:204]   - Generating certificates and keys ...
	I1025 22:06:48.374996  112102 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1025 22:06:48.375078  112102 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1025 22:06:48.375170  112102 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 22:06:48.375243  112102 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1025 22:06:48.375324  112102 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 22:06:48.375394  112102 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1025 22:06:48.375470  112102 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1025 22:06:48.375545  112102 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1025 22:06:48.375635  112102 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 22:06:48.375730  112102 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 22:06:48.375776  112102 kubeadm.go:322] [certs] Using the existing "sa" key
	I1025 22:06:48.375847  112102 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 22:06:48.519858  112102 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 22:06:48.846730  112102 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 22:06:49.149765  112102 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 22:06:49.206915  112102 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 22:06:49.207891  112102 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 22:06:49.210037  112102 out.go:204]   - Booting up control plane ...
	I1025 22:06:49.210180  112102 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 22:06:49.218015  112102 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 22:06:49.219336  112102 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 22:06:49.220194  112102 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 22:06:49.224332  112102 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 22:07:29.226615  112102 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I1025 22:10:49.230747  112102 kubeadm.go:322] 
	I1025 22:10:49.230818  112102 kubeadm.go:322] Unfortunately, an error has occurred:
	I1025 22:10:49.230870  112102 kubeadm.go:322] 	timed out waiting for the condition
	I1025 22:10:49.230883  112102 kubeadm.go:322] 
	I1025 22:10:49.230933  112102 kubeadm.go:322] This error is likely caused by:
	I1025 22:10:49.230980  112102 kubeadm.go:322] 	- The kubelet is not running
	I1025 22:10:49.231121  112102 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1025 22:10:49.231142  112102 kubeadm.go:322] 
	I1025 22:10:49.231267  112102 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1025 22:10:49.231312  112102 kubeadm.go:322] 	- 'systemctl status kubelet'
	I1025 22:10:49.231353  112102 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I1025 22:10:49.231365  112102 kubeadm.go:322] 
	I1025 22:10:49.231497  112102 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1025 22:10:49.231614  112102 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1025 22:10:49.231714  112102 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I1025 22:10:49.231777  112102 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I1025 22:10:49.231874  112102 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I1025 22:10:49.231917  112102 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I1025 22:10:49.234948  112102 kubeadm.go:322] W1025 22:06:47.490375   26648 validation.go:28] Cannot validate kubelet config - no validator is available
	I1025 22:10:49.235094  112102 kubeadm.go:322] W1025 22:06:47.490532   26648 validation.go:28] Cannot validate kube-proxy config - no validator is available
	I1025 22:10:49.235307  112102 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1025 22:10:49.235455  112102 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 22:10:49.235607  112102 kubeadm.go:322] W1025 22:06:49.213271   26648 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1025 22:10:49.235756  112102 kubeadm.go:322] W1025 22:06:49.214615   26648 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1025 22:10:49.235867  112102 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1025 22:10:49.235954  112102 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I1025 22:10:49.239352  112102 kubeadm.go:406] StartCluster complete in 12m57.848026362s
	I1025 22:10:49.239469  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 22:10:49.299018  112102 logs.go:284] 1 containers: [72913cce086f]
	I1025 22:10:49.299094  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 22:10:49.348618  112102 logs.go:284] 1 containers: [05d86b5157b9]
	I1025 22:10:49.348681  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 22:10:49.392720  112102 logs.go:284] 0 containers: []
	W1025 22:10:49.392746  112102 logs.go:286] No container was found matching "coredns"
	I1025 22:10:49.392806  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 22:10:49.458837  112102 logs.go:284] 1 containers: [4f8e9c9873a8]
	I1025 22:10:49.458936  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 22:10:49.513201  112102 logs.go:284] 0 containers: []
	W1025 22:10:49.513230  112102 logs.go:286] No container was found matching "kube-proxy"
	I1025 22:10:49.513292  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 22:10:49.573557  112102 logs.go:284] 2 containers: [977de49f1ea1 4f97f88c4d42]
	I1025 22:10:49.573653  112102 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 22:10:49.621005  112102 logs.go:284] 0 containers: []
	W1025 22:10:49.621028  112102 logs.go:286] No container was found matching "kindnet"
	I1025 22:10:49.621046  112102 logs.go:123] Gathering logs for kube-controller-manager [977de49f1ea1] ...
	I1025 22:10:49.621058  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977de49f1ea1"
	I1025 22:10:49.668886  112102 logs.go:123] Gathering logs for Docker ...
	I1025 22:10:49.668926  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 22:10:49.765973  112102 logs.go:123] Gathering logs for container status ...
	I1025 22:10:49.766017  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 22:10:49.828172  112102 logs.go:123] Gathering logs for dmesg ...
	I1025 22:10:49.828208  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 22:10:49.854730  112102 logs.go:123] Gathering logs for describe nodes ...
	I1025 22:10:49.854765  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 22:10:49.965289  112102 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	error: tls: private key does not match public key
	 output: 
	** stderr ** 
	error: tls: private key does not match public key
	
	** /stderr **
	I1025 22:10:49.965329  112102 logs.go:123] Gathering logs for kube-scheduler [4f8e9c9873a8] ...
	I1025 22:10:49.965348  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f8e9c9873a8"
	I1025 22:10:50.109694  112102 logs.go:123] Gathering logs for kube-controller-manager [4f97f88c4d42] ...
	I1025 22:10:50.109737  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f97f88c4d42"
	I1025 22:10:50.176695  112102 logs.go:123] Gathering logs for kubelet ...
	I1025 22:10:50.176738  112102 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1025 22:10:50.216154  112102 logs.go:138] Found kubelet problem: Oct 25 22:10:32 stopped-upgrade-634233 kubelet[1836]: E1025 22:10:32.398652    1836 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:10:50.256525  112102 logs.go:138] Found kubelet problem: Oct 25 22:10:45 stopped-upgrade-634233 kubelet[3111]: E1025 22:10:45.309286    3111 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	W1025 22:10:50.259352  112102 logs.go:138] Found kubelet problem: Oct 25 22:10:46 stopped-upgrade-634233 kubelet[3111]: E1025 22:10:46.288468    3111 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 22:10:50.269189  112102 logs.go:123] Gathering logs for kube-apiserver [72913cce086f] ...
	I1025 22:10:50.269249  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72913cce086f"
	I1025 22:10:50.374720  112102 logs.go:123] Gathering logs for etcd [05d86b5157b9] ...
	I1025 22:10:50.374759  112102 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 05d86b5157b9"
	W1025 22:10:50.425714  112102 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W1025 22:06:47.490375   26648 validation.go:28] Cannot validate kubelet config - no validator is available
	W1025 22:06:47.490532   26648 validation.go:28] Cannot validate kube-proxy config - no validator is available
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1025 22:06:49.213271   26648 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1025 22:06:49.214615   26648 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1025 22:10:50.425783  112102 out.go:239] * 
	W1025 22:10:50.425856  112102 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W1025 22:06:47.490375   26648 validation.go:28] Cannot validate kubelet config - no validator is available
	W1025 22:06:47.490532   26648 validation.go:28] Cannot validate kube-proxy config - no validator is available
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1025 22:06:49.213271   26648 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1025 22:06:49.214615   26648 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 22:10:50.425886  112102 out.go:239] * 
	W1025 22:10:50.427086  112102 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 22:10:50.429919  112102 out.go:177] X Problems detected in kubelet:
	I1025 22:10:50.431268  112102 out.go:177]   Oct 25 22:10:32 stopped-upgrade-634233 kubelet[1836]: E1025 22:10:32.398652    1836 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 22:10:50.432682  112102 out.go:177]   Oct 25 22:10:45 stopped-upgrade-634233 kubelet[3111]: E1025 22:10:45.309286    3111 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 22:10:50.434174  112102 out.go:177]   Oct 25 22:10:46 stopped-upgrade-634233 kubelet[3111]: E1025 22:10:46.288468    3111 pod_workers.go:191] Error syncing pod b74e0a0de07a18bd4c49aade69c681eb ("kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-stopped-upgrade-634233_kube-system(b74e0a0de07a18bd4c49aade69c681eb)"
	I1025 22:10:50.437400  112102 out.go:177] 
	W1025 22:10:50.438825  112102 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.17.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
	W1025 22:06:47.490375   26648 validation.go:28] Cannot validate kubelet config - no validator is available
	W1025 22:06:47.490532   26648 validation.go:28] Cannot validate kube-proxy config - no validator is available
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1025 22:06:49.213271   26648 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1025 22:06:49.214615   26648 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1025 22:10:50.438910  112102 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1025 22:10:50.438941  112102 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1025 22:10:50.440484  112102 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-634233 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : exit status 109
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (1037.86s)
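
The kubeadm output above already names the triage path for a kubelet that never comes up; collected into one pass (a sketch only, to be run inside the guest, e.g. via 'minikube ssh -p stopped-upgrade-634233'; CONTAINERID is a placeholder, not a value from this run):

	# Is the kubelet running, and why did it stop?
	systemctl status kubelet
	journalctl -xeu kubelet | tail -n 100
	# Which control-plane containers crashed or exited?
	docker ps -a | grep kube | grep -v pause
	# Inspect the failing container's logs (substitute a real ID).
	docker logs CONTAINERID

The kubelet excerpts above already show kube-apiserver in CrashLoopBackOff, so 'docker logs' on the apiserver container is the natural next step here.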
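
The 'describe nodes' attempt above fails with 'error: tls: private key does not match public key', i.e. some certificate/key pair on disk no longer belongs together. A minimal check, assuming the suspect pair is the apiserver one under /var/lib/minikube/certs (the certificateDir kubeadm reports; the exact file names are an assumption), compares public-key digests:

	# If the two digests differ, the certificate and key are mismatched.
	openssl x509 -noout -pubkey -in /var/lib/minikube/certs/apiserver.crt | openssl md5
	openssl pkey -pubout -in /var/lib/minikube/certs/apiserver.key | openssl md5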
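
The preflight warning about 'cgroupfs' as the Docker cgroup driver lines up with the exit suggestion to pass --extra-config=kubelet.cgroup-driver=systemd. The complementary, commonly documented fix on the Docker side (whether the v1.6.0 ISO would persist this file across restarts is an assumption) is:

	# /etc/docker/daemon.json -- align Docker's cgroup driver with the kubelet's,
	# then 'sudo systemctl restart docker'.
	{
	  "exec-opts": ["native.cgroupdriver=systemd"]
	}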

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-820759 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p old-k8s-version-820759 "sudo crictl images -o json": exit status 1 (247.391959ms)

                                                
                                                
-- stdout --
	FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-linux-amd64 ssh -p old-k8s-version-820759 \"sudo crictl images -o json\"": exit status 1
start_stop_delete_test.go:304: failed to decode images json invalid character '\x1b' looking for beginning of value. output:
FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
}
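
The FATA line above explains the empty image list: this crictl build validates the CRI v1 image API, which the legacy dockershim socket on a v1.16.0 node never implemented. For manual inspection (a sketch only, not what the test harness does), one can bypass CRI and ask Docker directly, mirroring the test's own ssh invocation:

	# Lists image refs on a dockershim-era node where 'crictl images' fails
	# with "unknown service runtime.v1.ImageService".
	out/minikube-linux-amd64 ssh -p old-k8s-version-820759 "sudo docker images --format '{{.Repository}}:{{.Tag}}'"
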
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-820759 -n old-k8s-version-820759
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-820759 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-820759 logs -n 25: (1.010720559s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p no-preload-252683                  | no-preload-252683            | jenkins | v1.31.2 | 25 Oct 23 22:13 UTC | 25 Oct 23 22:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-252683                                   | no-preload-252683            | jenkins | v1.31.2 | 25 Oct 23 22:13 UTC | 25 Oct 23 22:19 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-847378       | default-k8s-diff-port-847378 | jenkins | v1.31.2 | 25 Oct 23 22:13 UTC | 25 Oct 23 22:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-847378 | jenkins | v1.31.2 | 25 Oct 23 22:13 UTC | 25 Oct 23 22:20 UTC |
	|         | default-k8s-diff-port-847378                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.3                           |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-475300 sudo                             | embed-certs-475300           | jenkins | v1.31.2 | 25 Oct 23 22:19 UTC | 25 Oct 23 22:19 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p embed-certs-475300                                  | embed-certs-475300           | jenkins | v1.31.2 | 25 Oct 23 22:19 UTC | 25 Oct 23 22:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-475300                                  | embed-certs-475300           | jenkins | v1.31.2 | 25 Oct 23 22:19 UTC | 25 Oct 23 22:19 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-475300                                  | embed-certs-475300           | jenkins | v1.31.2 | 25 Oct 23 22:19 UTC | 25 Oct 23 22:19 UTC |
	| delete  | -p embed-certs-475300                                  | embed-certs-475300           | jenkins | v1.31.2 | 25 Oct 23 22:19 UTC | 25 Oct 23 22:19 UTC |
	| start   | -p newest-cni-506800 --memory=2200 --alsologtostderr   | newest-cni-506800            | jenkins | v1.31.2 | 25 Oct 23 22:19 UTC | 25 Oct 23 22:20 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.28.3            |                              |         |         |                     |                     |
	| ssh     | -p no-preload-252683 sudo                              | no-preload-252683            | jenkins | v1.31.2 | 25 Oct 23 22:20 UTC | 25 Oct 23 22:20 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p no-preload-252683                                   | no-preload-252683            | jenkins | v1.31.2 | 25 Oct 23 22:20 UTC | 25 Oct 23 22:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-252683                                   | no-preload-252683            | jenkins | v1.31.2 | 25 Oct 23 22:20 UTC | 25 Oct 23 22:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-252683                                   | no-preload-252683            | jenkins | v1.31.2 | 25 Oct 23 22:20 UTC | 25 Oct 23 22:20 UTC |
	| delete  | -p no-preload-252683                                   | no-preload-252683            | jenkins | v1.31.2 | 25 Oct 23 22:20 UTC | 25 Oct 23 22:20 UTC |
	| ssh     | -p                                                     | default-k8s-diff-port-847378 | jenkins | v1.31.2 | 25 Oct 23 22:20 UTC | 25 Oct 23 22:20 UTC |
	|         | default-k8s-diff-port-847378                           |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-847378 | jenkins | v1.31.2 | 25 Oct 23 22:20 UTC | 25 Oct 23 22:20 UTC |
	|         | default-k8s-diff-port-847378                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-847378 | jenkins | v1.31.2 | 25 Oct 23 22:20 UTC | 25 Oct 23 22:20 UTC |
	|         | default-k8s-diff-port-847378                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-847378 | jenkins | v1.31.2 | 25 Oct 23 22:20 UTC | 25 Oct 23 22:20 UTC |
	|         | default-k8s-diff-port-847378                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-847378 | jenkins | v1.31.2 | 25 Oct 23 22:20 UTC | 25 Oct 23 22:20 UTC |
	|         | default-k8s-diff-port-847378                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-506800             | newest-cni-506800            | jenkins | v1.31.2 | 25 Oct 23 22:20 UTC | 25 Oct 23 22:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-506800                                   | newest-cni-506800            | jenkins | v1.31.2 | 25 Oct 23 22:20 UTC | 25 Oct 23 22:21 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-506800                  | newest-cni-506800            | jenkins | v1.31.2 | 25 Oct 23 22:21 UTC | 25 Oct 23 22:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-506800 --memory=2200 --alsologtostderr   | newest-cni-506800            | jenkins | v1.31.2 | 25 Oct 23 22:21 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.28.3            |                              |         |         |                     |                     |
	| ssh     | -p old-k8s-version-820759 sudo                         | old-k8s-version-820759       | jenkins | v1.31.2 | 25 Oct 23 22:21 UTC |                     |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 22:21:05
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 22:21:05.007654  138917 out.go:296] Setting OutFile to fd 1 ...
	I1025 22:21:05.007936  138917 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 22:21:05.007946  138917 out.go:309] Setting ErrFile to fd 2...
	I1025 22:21:05.007951  138917 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 22:21:05.008130  138917 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-80960/.minikube/bin
	I1025 22:21:05.008709  138917 out.go:303] Setting JSON to false
	I1025 22:21:05.009662  138917 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":14600,"bootTime":1698257865,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 22:21:05.009725  138917 start.go:138] virtualization: kvm guest
	I1025 22:21:05.012185  138917 out.go:177] * [newest-cni-506800] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1025 22:21:05.013794  138917 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 22:21:05.015270  138917 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 22:21:05.013879  138917 notify.go:220] Checking for updates...
	I1025 22:21:05.017980  138917 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17488-80960/kubeconfig
	I1025 22:21:05.019549  138917 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-80960/.minikube
	I1025 22:21:05.020833  138917 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 22:21:05.022143  138917 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 22:21:05.023780  138917 config.go:182] Loaded profile config "newest-cni-506800": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 22:21:05.024266  138917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1025 22:21:05.024334  138917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:21:05.039392  138917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38667
	I1025 22:21:05.039959  138917 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:21:05.040541  138917 main.go:141] libmachine: Using API Version  1
	I1025 22:21:05.040564  138917 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:21:05.040889  138917 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:21:05.041060  138917 main.go:141] libmachine: (newest-cni-506800) Calling .DriverName
	I1025 22:21:05.041327  138917 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 22:21:05.041586  138917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1025 22:21:05.041618  138917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:21:05.055075  138917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41749
	I1025 22:21:05.055413  138917 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:21:05.055900  138917 main.go:141] libmachine: Using API Version  1
	I1025 22:21:05.055919  138917 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:21:05.056210  138917 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:21:05.056371  138917 main.go:141] libmachine: (newest-cni-506800) Calling .DriverName
	I1025 22:21:05.089423  138917 out.go:177] * Using the kvm2 driver based on existing profile
	I1025 22:21:05.090858  138917 start.go:298] selected driver: kvm2
	I1025 22:21:05.090873  138917 start.go:902] validating driver "kvm2" against &{Name:newest-cni-506800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-506800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.115 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 22:21:05.090999  138917 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 22:21:05.091716  138917 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 22:21:05.091810  138917 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17488-80960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1025 22:21:05.105675  138917 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1025 22:21:05.106177  138917 start_flags.go:945] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 22:21:05.106282  138917 cni.go:84] Creating CNI manager for ""
	I1025 22:21:05.106306  138917 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 22:21:05.106318  138917 start_flags.go:323] config:
	{Name:newest-cni-506800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-506800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.115 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
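
The block just dumped is the in-memory cluster config for this profile; a few lines below it is persisted to profiles/newest-cni-506800/config.json. A trimmed Go sketch of that shape, with field names taken from the dump itself (the struct layout and types here are an illustration, not minikube's exact source):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"time"
    )

    // Trimmed sketch of the cluster config dumped above; field names mirror
    // the dump, but the struct layout is an assumption for illustration.
    type ClusterConfig struct {
    	Name             string
    	Memory           int // MB
    	CPUs             int
    	Driver           string
    	KubernetesConfig KubernetesConfig
    	Nodes            []Node
    	Addons           map[string]bool
    	CertExpiration   time.Duration
    }

    type KubernetesConfig struct {
    	KubernetesVersion string
    	ClusterName       string
    	ContainerRuntime  string
    	NetworkPlugin     string
    	FeatureGates      string
    	ServiceCIDR       string
    }

    type Node struct {
    	IP           string
    	Port         int
    	ControlPlane bool
    	Worker       bool
    }

    func main() {
    	cc := ClusterConfig{
    		Name:   "newest-cni-506800",
    		Memory: 2200, CPUs: 2, Driver: "kvm2",
    		KubernetesConfig: KubernetesConfig{
    			KubernetesVersion: "v1.28.3", ClusterName: "newest-cni-506800",
    			ContainerRuntime: "docker", NetworkPlugin: "cni",
    			FeatureGates: "ServerSideApply=true", ServiceCIDR: "10.96.0.0/12",
    		},
    		Nodes:          []Node{{IP: "192.168.61.115", Port: 8443, ControlPlane: true, Worker: true}},
    		Addons:         map[string]bool{"dashboard": true, "metrics-server": true},
    		CertExpiration: 26280 * time.Hour,
    	}
    	b, _ := json.MarshalIndent(cc, "", "  ")
    	fmt.Println(string(b))
    }
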
	I1025 22:21:05.106553  138917 iso.go:125] acquiring lock: {Name:mk6659ecb6ed7b24fa2ae65bc0b8e3b5916d75e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 22:21:05.108382  138917 out.go:177] * Starting control plane node newest-cni-506800 in cluster newest-cni-506800
	I1025 22:21:05.109766  138917 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 22:21:05.109808  138917 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17488-80960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1025 22:21:05.109824  138917 cache.go:56] Caching tarball of preloaded images
	I1025 22:21:05.109921  138917 preload.go:174] Found /home/jenkins/minikube-integration/17488-80960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1025 22:21:05.109937  138917 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1025 22:21:05.110093  138917 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/newest-cni-506800/config.json ...
	I1025 22:21:05.110353  138917 start.go:365] acquiring machines lock for newest-cni-506800: {Name:mk84b47429efad52c9c4eeca04f7cb6277d41bb4 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 22:21:05.110405  138917 start.go:369] acquired machines lock for "newest-cni-506800" in 30.043µs
	I1025 22:21:05.110426  138917 start.go:96] Skipping create...Using existing machine configuration
	I1025 22:21:05.110437  138917 fix.go:54] fixHost starting: 
	I1025 22:21:05.110765  138917 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1025 22:21:05.110805  138917 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 22:21:05.124307  138917 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46547
	I1025 22:21:05.124717  138917 main.go:141] libmachine: () Calling .GetVersion
	I1025 22:21:05.125165  138917 main.go:141] libmachine: Using API Version  1
	I1025 22:21:05.125188  138917 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 22:21:05.125491  138917 main.go:141] libmachine: () Calling .GetMachineName
	I1025 22:21:05.125681  138917 main.go:141] libmachine: (newest-cni-506800) Calling .DriverName
	I1025 22:21:05.125833  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetState
	I1025 22:21:05.127458  138917 fix.go:102] recreateIfNeeded on newest-cni-506800: state=Stopped err=<nil>
	I1025 22:21:05.127484  138917 main.go:141] libmachine: (newest-cni-506800) Calling .DriverName
	W1025 22:21:05.127670  138917 fix.go:128] unexpected machine state, will restart: <nil>
	I1025 22:21:05.129707  138917 out.go:177] * Restarting existing kvm2 VM for "newest-cni-506800" ...
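
fixHost found the libvirt domain in state Stopped, so the existing VM is restarted rather than recreated. A minimal sketch of that decision, under the assumption of a simple state enum (names here are illustrative, not minikube's exact API):

    package main

    import "fmt"

    // State is an illustrative stand-in for the machine states the driver reports.
    type State int

    const (
    	Running State = iota
    	Stopped
    	None // machine does not exist
    )

    // recreateIfNeeded sketches the fixHost logic above: reuse and restart a
    // stopped machine, create one only when nothing exists yet.
    func recreateIfNeeded(st State, start, create func() error) error {
    	switch st {
    	case Running:
    		return nil // nothing to do
    	case Stopped:
    		fmt.Println("* Restarting existing kvm2 VM ...")
    		return start()
    	default:
    		return create()
    	}
    }

    func main() {
    	_ = recreateIfNeeded(Stopped,
    		func() error { fmt.Println("virsh start <domain>"); return nil },
    		func() error { fmt.Println("define + create domain"); return nil },
    	)
    }
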
	I1025 22:21:01.867526  135028 system_pods.go:86] 7 kube-system pods found
	I1025 22:21:01.867555  135028 system_pods.go:89] "coredns-5644d7b6d9-d26x9" [378324a4-b86d-4873-a9b1-5d4a7f15843f] Running
	I1025 22:21:01.867562  135028 system_pods.go:89] "kube-apiserver-old-k8s-version-820759" [258e67ef-f26e-4171-aea8-22f915d0c440] Pending
	I1025 22:21:01.867565  135028 system_pods.go:89] "kube-controller-manager-old-k8s-version-820759" [efb9bbf0-e675-48e5-9c2a-458850a2581a] Running
	I1025 22:21:01.867570  135028 system_pods.go:89] "kube-proxy-7dhp5" [1b896127-891f-4968-991c-446dffbdc667] Running
	I1025 22:21:01.867574  135028 system_pods.go:89] "kube-scheduler-old-k8s-version-820759" [d0695137-87e0-48ef-87e4-b6c245a6a8d9] Pending
	I1025 22:21:01.867581  135028 system_pods.go:89] "metrics-server-74d5856cc6-c7s5p" [1b4df6e5-c51d-42bf-bff1-a4271ca59446] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 22:21:01.867588  135028 system_pods.go:89] "storage-provisioner" [f3b29f55-a4dd-4ca9-beca-adce87e76f8a] Running
	I1025 22:21:01.867606  135028 retry.go:31] will retry after 8.710993241s: missing components: etcd, kube-apiserver, kube-scheduler
	I1025 22:21:05.131134  138917 main.go:141] libmachine: (newest-cni-506800) Calling .Start
	I1025 22:21:05.131317  138917 main.go:141] libmachine: (newest-cni-506800) Ensuring networks are active...
	I1025 22:21:05.132045  138917 main.go:141] libmachine: (newest-cni-506800) Ensuring network default is active
	I1025 22:21:05.132354  138917 main.go:141] libmachine: (newest-cni-506800) Ensuring network mk-newest-cni-506800 is active
	I1025 22:21:05.132866  138917 main.go:141] libmachine: (newest-cni-506800) Getting domain xml...
	I1025 22:21:05.133718  138917 main.go:141] libmachine: (newest-cni-506800) Creating domain...
	I1025 22:21:06.377900  138917 main.go:141] libmachine: (newest-cni-506800) Waiting to get IP...
	I1025 22:21:06.379041  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:06.379565  138917 main.go:141] libmachine: (newest-cni-506800) DBG | unable to find current IP address of domain newest-cni-506800 in network mk-newest-cni-506800
	I1025 22:21:06.379665  138917 main.go:141] libmachine: (newest-cni-506800) DBG | I1025 22:21:06.379557  138952 retry.go:31] will retry after 227.471088ms: waiting for machine to come up
	I1025 22:21:06.609365  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:06.609821  138917 main.go:141] libmachine: (newest-cni-506800) DBG | unable to find current IP address of domain newest-cni-506800 in network mk-newest-cni-506800
	I1025 22:21:06.609847  138917 main.go:141] libmachine: (newest-cni-506800) DBG | I1025 22:21:06.609763  138952 retry.go:31] will retry after 299.367601ms: waiting for machine to come up
	I1025 22:21:06.910363  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:06.910988  138917 main.go:141] libmachine: (newest-cni-506800) DBG | unable to find current IP address of domain newest-cni-506800 in network mk-newest-cni-506800
	I1025 22:21:06.911017  138917 main.go:141] libmachine: (newest-cni-506800) DBG | I1025 22:21:06.910950  138952 retry.go:31] will retry after 449.429718ms: waiting for machine to come up
	I1025 22:21:07.361563  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:07.361974  138917 main.go:141] libmachine: (newest-cni-506800) DBG | unable to find current IP address of domain newest-cni-506800 in network mk-newest-cni-506800
	I1025 22:21:07.362019  138917 main.go:141] libmachine: (newest-cni-506800) DBG | I1025 22:21:07.361933  138952 retry.go:31] will retry after 583.306687ms: waiting for machine to come up
	I1025 22:21:07.946665  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:07.947253  138917 main.go:141] libmachine: (newest-cni-506800) DBG | unable to find current IP address of domain newest-cni-506800 in network mk-newest-cni-506800
	I1025 22:21:07.947278  138917 main.go:141] libmachine: (newest-cni-506800) DBG | I1025 22:21:07.947204  138952 retry.go:31] will retry after 755.775315ms: waiting for machine to come up
	I1025 22:21:08.704268  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:08.704792  138917 main.go:141] libmachine: (newest-cni-506800) DBG | unable to find current IP address of domain newest-cni-506800 in network mk-newest-cni-506800
	I1025 22:21:08.704823  138917 main.go:141] libmachine: (newest-cni-506800) DBG | I1025 22:21:08.704750  138952 retry.go:31] will retry after 880.997453ms: waiting for machine to come up
	I1025 22:21:09.587905  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:09.588465  138917 main.go:141] libmachine: (newest-cni-506800) DBG | unable to find current IP address of domain newest-cni-506800 in network mk-newest-cni-506800
	I1025 22:21:09.588491  138917 main.go:141] libmachine: (newest-cni-506800) DBG | I1025 22:21:09.588429  138952 retry.go:31] will retry after 856.839738ms: waiting for machine to come up
	I1025 22:21:10.588812  135028 system_pods.go:86] 8 kube-system pods found
	I1025 22:21:10.588844  135028 system_pods.go:89] "coredns-5644d7b6d9-d26x9" [378324a4-b86d-4873-a9b1-5d4a7f15843f] Running
	I1025 22:21:10.588852  135028 system_pods.go:89] "etcd-old-k8s-version-820759" [09f6fe8c-371b-4805-b0a0-606adea0d876] Pending
	I1025 22:21:10.588859  135028 system_pods.go:89] "kube-apiserver-old-k8s-version-820759" [258e67ef-f26e-4171-aea8-22f915d0c440] Running
	I1025 22:21:10.588867  135028 system_pods.go:89] "kube-controller-manager-old-k8s-version-820759" [efb9bbf0-e675-48e5-9c2a-458850a2581a] Running
	I1025 22:21:10.588874  135028 system_pods.go:89] "kube-proxy-7dhp5" [1b896127-891f-4968-991c-446dffbdc667] Running
	I1025 22:21:10.588881  135028 system_pods.go:89] "kube-scheduler-old-k8s-version-820759" [d0695137-87e0-48ef-87e4-b6c245a6a8d9] Running
	I1025 22:21:10.588891  135028 system_pods.go:89] "metrics-server-74d5856cc6-c7s5p" [1b4df6e5-c51d-42bf-bff1-a4271ca59446] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 22:21:10.588903  135028 system_pods.go:89] "storage-provisioner" [f3b29f55-a4dd-4ca9-beca-adce87e76f8a] Running
	I1025 22:21:10.588922  135028 retry.go:31] will retry after 9.653917091s: missing components: etcd
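
Meanwhile the parallel old-k8s-version start (pid 135028) is polling kube-system until every expected control-plane pod reports Running, printing whatever is still missing on each pass (by now only etcd). A hedged client-go sketch of such a check; this is not minikube's actual system_pods.go, and the kubeconfig path is a placeholder:

    package main

    import (
    	"context"
    	"fmt"
    	"strings"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // missingComponents lists expected control-plane components that have no
    // Running pod in kube-system yet, matching pods by name prefix
    // (e.g. "etcd-old-k8s-version-820759" matches "etcd").
    func missingComponents(cs kubernetes.Interface) ([]string, error) {
    	expected := []string{"etcd", "kube-apiserver", "kube-controller-manager", "kube-scheduler", "kube-proxy"}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		return nil, err
    	}
    	running := map[string]bool{}
    	for _, p := range pods.Items {
    		if p.Status.Phase != corev1.PodRunning {
    			continue
    		}
    		for _, c := range expected {
    			if strings.HasPrefix(p.Name, c) {
    				running[c] = true
    			}
    		}
    	}
    	var missing []string
    	for _, c := range expected {
    		if !running[c] {
    			missing = append(missing, c)
    		}
    	}
    	return missing, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	missing, err := missingComponents(cs)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("missing components:", strings.Join(missing, ", "))
    }
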
	I1025 22:21:10.447077  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:10.447632  138917 main.go:141] libmachine: (newest-cni-506800) DBG | unable to find current IP address of domain newest-cni-506800 in network mk-newest-cni-506800
	I1025 22:21:10.447664  138917 main.go:141] libmachine: (newest-cni-506800) DBG | I1025 22:21:10.447555  138952 retry.go:31] will retry after 1.370936627s: waiting for machine to come up
	I1025 22:21:11.820256  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:11.820746  138917 main.go:141] libmachine: (newest-cni-506800) DBG | unable to find current IP address of domain newest-cni-506800 in network mk-newest-cni-506800
	I1025 22:21:11.820779  138917 main.go:141] libmachine: (newest-cni-506800) DBG | I1025 22:21:11.820697  138952 retry.go:31] will retry after 1.134240067s: waiting for machine to come up
	I1025 22:21:12.957050  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:12.957506  138917 main.go:141] libmachine: (newest-cni-506800) DBG | unable to find current IP address of domain newest-cni-506800 in network mk-newest-cni-506800
	I1025 22:21:12.957540  138917 main.go:141] libmachine: (newest-cni-506800) DBG | I1025 22:21:12.957450  138952 retry.go:31] will retry after 1.435423229s: waiting for machine to come up
	I1025 22:21:14.394267  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:14.394788  138917 main.go:141] libmachine: (newest-cni-506800) DBG | unable to find current IP address of domain newest-cni-506800 in network mk-newest-cni-506800
	I1025 22:21:14.394821  138917 main.go:141] libmachine: (newest-cni-506800) DBG | I1025 22:21:14.394688  138952 retry.go:31] will retry after 1.884168316s: waiting for machine to come up
	I1025 22:21:16.280875  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:16.281381  138917 main.go:141] libmachine: (newest-cni-506800) DBG | unable to find current IP address of domain newest-cni-506800 in network mk-newest-cni-506800
	I1025 22:21:16.281418  138917 main.go:141] libmachine: (newest-cni-506800) DBG | I1025 22:21:16.281309  138952 retry.go:31] will retry after 3.524089367s: waiting for machine to come up
	I1025 22:21:19.806577  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:19.807106  138917 main.go:141] libmachine: (newest-cni-506800) DBG | unable to find current IP address of domain newest-cni-506800 in network mk-newest-cni-506800
	I1025 22:21:19.807151  138917 main.go:141] libmachine: (newest-cni-506800) DBG | I1025 22:21:19.807048  138952 retry.go:31] will retry after 4.028417514s: waiting for machine to come up
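
The irregular waits above (227ms, 299ms, 449ms, ... 4.03s) are an exponential backoff with jitter around the IP probe. A stdlib-only sketch of the pattern; minikube's real helper in retry.go may differ in detail:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff retries fn with an exponentially growing, jittered
    // delay, mirroring the "will retry after X: waiting for machine to come
    // up" lines in the log above.
    func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
    	delay := initial
    	for i := 0; i < attempts; i++ {
    		if err := fn(); err == nil {
    			return nil
    		}
    		// +/-25% jitter so parallel waiters do not fall into lockstep.
    		jitter := time.Duration(rand.Int63n(int64(delay)/2)) - delay/4
    		wait := delay + jitter
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		delay *= 2
    	}
    	return errors.New("timed out waiting for machine IP")
    }

    func main() {
    	probes := 0
    	_ = retryWithBackoff(8, 250*time.Millisecond, func() error {
    		probes++
    		if probes < 4 {
    			return errors.New("no DHCP lease yet")
    		}
    		return nil
    	})
    }
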
	I1025 22:21:20.250382  135028 system_pods.go:86] 8 kube-system pods found
	I1025 22:21:20.250406  135028 system_pods.go:89] "coredns-5644d7b6d9-d26x9" [378324a4-b86d-4873-a9b1-5d4a7f15843f] Running
	I1025 22:21:20.250412  135028 system_pods.go:89] "etcd-old-k8s-version-820759" [09f6fe8c-371b-4805-b0a0-606adea0d876] Running
	I1025 22:21:20.250416  135028 system_pods.go:89] "kube-apiserver-old-k8s-version-820759" [258e67ef-f26e-4171-aea8-22f915d0c440] Running
	I1025 22:21:20.250421  135028 system_pods.go:89] "kube-controller-manager-old-k8s-version-820759" [efb9bbf0-e675-48e5-9c2a-458850a2581a] Running
	I1025 22:21:20.250424  135028 system_pods.go:89] "kube-proxy-7dhp5" [1b896127-891f-4968-991c-446dffbdc667] Running
	I1025 22:21:20.250428  135028 system_pods.go:89] "kube-scheduler-old-k8s-version-820759" [d0695137-87e0-48ef-87e4-b6c245a6a8d9] Running
	I1025 22:21:20.250434  135028 system_pods.go:89] "metrics-server-74d5856cc6-c7s5p" [1b4df6e5-c51d-42bf-bff1-a4271ca59446] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 22:21:20.250442  135028 system_pods.go:89] "storage-provisioner" [f3b29f55-a4dd-4ca9-beca-adce87e76f8a] Running
	I1025 22:21:20.250452  135028 system_pods.go:126] duration metric: took 44.970370706s to wait for k8s-apps to be running ...
	I1025 22:21:20.250462  135028 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 22:21:20.250507  135028 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 22:21:20.265292  135028 system_svc.go:56] duration metric: took 14.812453ms WaitForService to wait for kubelet.
	I1025 22:21:20.265321  135028 kubeadm.go:581] duration metric: took 1m15.762591509s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1025 22:21:20.265342  135028 node_conditions.go:102] verifying NodePressure condition ...
	I1025 22:21:20.268177  135028 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1025 22:21:20.268196  135028 node_conditions.go:123] node cpu capacity is 2
	I1025 22:21:20.268208  135028 node_conditions.go:105] duration metric: took 2.862159ms to run NodePressure ...
	I1025 22:21:20.268237  135028 start.go:228] waiting for startup goroutines ...
	I1025 22:21:20.268250  135028 start.go:233] waiting for cluster config update ...
	I1025 22:21:20.268260  135028 start.go:242] writing updated cluster config ...
	I1025 22:21:20.268519  135028 ssh_runner.go:195] Run: rm -f paused
	I1025 22:21:20.327822  135028 start.go:600] kubectl: 1.28.3, cluster: 1.16.0 (minor skew: 12)
	I1025 22:21:20.329635  135028 out.go:177] 
	W1025 22:21:20.331166  135028 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.16.0.
	I1025 22:21:20.332736  135028 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1025 22:21:20.334591  135028 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-820759" cluster and "default" namespace by default
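
The version warning above is a plain minor-version comparison: kubectl 1.28.3 against a 1.16.0 cluster is a skew of 12 minors, far beyond the one-minor window kubectl officially supports, hence the pointer to the bundled 'minikube kubectl'. A toy sketch of the computation:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns the absolute difference between the minor components
    // of two "major.minor.patch" version strings, e.g. "1.28.3" vs "1.16.0" -> 12.
    func minorSkew(client, server string) (int, error) {
    	minor := func(v string) (int, error) {
    		parts := strings.Split(v, ".")
    		if len(parts) < 2 {
    			return 0, fmt.Errorf("malformed version %q", v)
    		}
    		return strconv.Atoi(parts[1])
    	}
    	c, err := minor(client)
    	if err != nil {
    		return 0, err
    	}
    	s, err := minor(server)
    	if err != nil {
    		return 0, err
    	}
    	if c < s {
    		c, s = s, c
    	}
    	return c - s, nil
    }

    func main() {
    	skew, _ := minorSkew("1.28.3", "1.16.0")
    	fmt.Printf("minor skew: %d\n", skew) // 12, matching the log line above
    }
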
	I1025 22:21:23.840005  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:23.840565  138917 main.go:141] libmachine: (newest-cni-506800) Found IP for machine: 192.168.61.115
	I1025 22:21:23.840607  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has current primary IP address 192.168.61.115 and MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:23.840628  138917 main.go:141] libmachine: (newest-cni-506800) Reserving static IP address...
	I1025 22:21:23.841101  138917 main.go:141] libmachine: (newest-cni-506800) DBG | found host DHCP lease matching {name: "newest-cni-506800", mac: "52:54:00:76:76:b5", ip: "192.168.61.115"} in network mk-newest-cni-506800: {Iface:virbr3 ExpiryTime:2023-10-25 23:19:49 +0000 UTC Type:0 Mac:52:54:00:76:76:b5 Iaid: IPaddr:192.168.61.115 Prefix:24 Hostname:newest-cni-506800 Clientid:01:52:54:00:76:76:b5}
	I1025 22:21:23.841134  138917 main.go:141] libmachine: (newest-cni-506800) Reserved static IP address: 192.168.61.115
	I1025 22:21:23.841151  138917 main.go:141] libmachine: (newest-cni-506800) DBG | skip adding static IP to network mk-newest-cni-506800 - found existing host DHCP lease matching {name: "newest-cni-506800", mac: "52:54:00:76:76:b5", ip: "192.168.61.115"}
	I1025 22:21:23.841166  138917 main.go:141] libmachine: (newest-cni-506800) DBG | Getting to WaitForSSH function...
	I1025 22:21:23.841175  138917 main.go:141] libmachine: (newest-cni-506800) Waiting for SSH to be available...
	I1025 22:21:23.843291  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:23.843629  138917 main.go:141] libmachine: (newest-cni-506800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:76:b5", ip: ""} in network mk-newest-cni-506800: {Iface:virbr3 ExpiryTime:2023-10-25 23:19:49 +0000 UTC Type:0 Mac:52:54:00:76:76:b5 Iaid: IPaddr:192.168.61.115 Prefix:24 Hostname:newest-cni-506800 Clientid:01:52:54:00:76:76:b5}
	I1025 22:21:23.843674  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined IP address 192.168.61.115 and MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:23.843799  138917 main.go:141] libmachine: (newest-cni-506800) DBG | Using SSH client type: external
	I1025 22:21:23.843827  138917 main.go:141] libmachine: (newest-cni-506800) DBG | Using SSH private key: /home/jenkins/minikube-integration/17488-80960/.minikube/machines/newest-cni-506800/id_rsa (-rw-------)
	I1025 22:21:23.843879  138917 main.go:141] libmachine: (newest-cni-506800) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.115 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17488-80960/.minikube/machines/newest-cni-506800/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1025 22:21:23.843901  138917 main.go:141] libmachine: (newest-cni-506800) DBG | About to run SSH command:
	I1025 22:21:23.843916  138917 main.go:141] libmachine: (newest-cni-506800) DBG | exit 0
	I1025 22:21:23.932387  138917 main.go:141] libmachine: (newest-cni-506800) DBG | SSH cmd err, output: <nil>: 
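
WaitForSSH shells out to the system ssh binary with the exact flags in the debug line above, running 'exit 0' until the guest's sshd answers. A sketch of that probe with os/exec; the key path is a placeholder:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // waitForSSH runs a no-op command over ssh to test that the guest's
    // sshd is up; the options mirror the ones in the debug output above.
    func waitForSSH(ip, keyPath string) error {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		"docker@" + ip,
    		"exit 0",
    	}
    	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("ssh probe failed: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	err := waitForSSH("192.168.61.115", "/path/to/machines/newest-cni-506800/id_rsa") // placeholder key path
    	fmt.Println("ssh ready:", err == nil)
    }
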
	I1025 22:21:23.932791  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetConfigRaw
	I1025 22:21:23.933525  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetIP
	I1025 22:21:23.936237  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:23.936606  138917 main.go:141] libmachine: (newest-cni-506800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:76:b5", ip: ""} in network mk-newest-cni-506800: {Iface:virbr3 ExpiryTime:2023-10-25 23:19:49 +0000 UTC Type:0 Mac:52:54:00:76:76:b5 Iaid: IPaddr:192.168.61.115 Prefix:24 Hostname:newest-cni-506800 Clientid:01:52:54:00:76:76:b5}
	I1025 22:21:23.936639  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined IP address 192.168.61.115 and MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:23.936831  138917 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/newest-cni-506800/config.json ...
	I1025 22:21:23.937065  138917 machine.go:88] provisioning docker machine ...
	I1025 22:21:23.937086  138917 main.go:141] libmachine: (newest-cni-506800) Calling .DriverName
	I1025 22:21:23.937305  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetMachineName
	I1025 22:21:23.937503  138917 buildroot.go:166] provisioning hostname "newest-cni-506800"
	I1025 22:21:23.937525  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetMachineName
	I1025 22:21:23.937668  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHHostname
	I1025 22:21:23.940307  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:23.940724  138917 main.go:141] libmachine: (newest-cni-506800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:76:b5", ip: ""} in network mk-newest-cni-506800: {Iface:virbr3 ExpiryTime:2023-10-25 23:19:49 +0000 UTC Type:0 Mac:52:54:00:76:76:b5 Iaid: IPaddr:192.168.61.115 Prefix:24 Hostname:newest-cni-506800 Clientid:01:52:54:00:76:76:b5}
	I1025 22:21:23.940767  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined IP address 192.168.61.115 and MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:23.940893  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHPort
	I1025 22:21:23.941074  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHKeyPath
	I1025 22:21:23.941214  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHKeyPath
	I1025 22:21:23.941361  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHUsername
	I1025 22:21:23.941540  138917 main.go:141] libmachine: Using SSH client type: native
	I1025 22:21:23.941881  138917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.115 22 <nil> <nil>}
	I1025 22:21:23.941894  138917 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-506800 && echo "newest-cni-506800" | sudo tee /etc/hostname
	I1025 22:21:24.069154  138917 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-506800
	
	I1025 22:21:24.069187  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHHostname
	I1025 22:21:24.072249  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:24.072542  138917 main.go:141] libmachine: (newest-cni-506800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:76:b5", ip: ""} in network mk-newest-cni-506800: {Iface:virbr3 ExpiryTime:2023-10-25 23:19:49 +0000 UTC Type:0 Mac:52:54:00:76:76:b5 Iaid: IPaddr:192.168.61.115 Prefix:24 Hostname:newest-cni-506800 Clientid:01:52:54:00:76:76:b5}
	I1025 22:21:24.072589  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined IP address 192.168.61.115 and MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:24.072719  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHPort
	I1025 22:21:24.072980  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHKeyPath
	I1025 22:21:24.073141  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHKeyPath
	I1025 22:21:24.073248  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHUsername
	I1025 22:21:24.073385  138917 main.go:141] libmachine: Using SSH client type: native
	I1025 22:21:24.073788  138917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.115 22 <nil> <nil>}
	I1025 22:21:24.073820  138917 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-506800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-506800/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-506800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 22:21:24.196507  138917 main.go:141] libmachine: SSH cmd err, output: <nil>: 
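
For provisioning commands like the hostname and /etc/hosts edits above, the "native" SSH client from the earlier struct dumps is used instead of the external binary. A hedged sketch of running one such command with golang.org/x/crypto/ssh (error handling trimmed, key path a placeholder):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runOverSSH opens one session and runs a single command, roughly what
    // each "About to run SSH command" / "SSH cmd err, output" pair reflects.
    func runOverSSH(addr, keyPath, cmd string) (string, error) {
    	pemBytes, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(pemBytes)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runOverSSH("192.168.61.115:22", "/path/to/id_rsa", "hostname") // placeholder key path
    	fmt.Println(out, err)
    }
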
	I1025 22:21:24.196556  138917 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17488-80960/.minikube CaCertPath:/home/jenkins/minikube-integration/17488-80960/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17488-80960/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17488-80960/.minikube}
	I1025 22:21:24.196615  138917 buildroot.go:174] setting up certificates
	I1025 22:21:24.196626  138917 provision.go:83] configureAuth start
	I1025 22:21:24.196637  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetMachineName
	I1025 22:21:24.196920  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetIP
	I1025 22:21:24.199457  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:24.199790  138917 main.go:141] libmachine: (newest-cni-506800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:76:b5", ip: ""} in network mk-newest-cni-506800: {Iface:virbr3 ExpiryTime:2023-10-25 23:19:49 +0000 UTC Type:0 Mac:52:54:00:76:76:b5 Iaid: IPaddr:192.168.61.115 Prefix:24 Hostname:newest-cni-506800 Clientid:01:52:54:00:76:76:b5}
	I1025 22:21:24.199821  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined IP address 192.168.61.115 and MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:24.199913  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHHostname
	I1025 22:21:24.201824  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:24.202091  138917 main.go:141] libmachine: (newest-cni-506800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:76:b5", ip: ""} in network mk-newest-cni-506800: {Iface:virbr3 ExpiryTime:2023-10-25 23:19:49 +0000 UTC Type:0 Mac:52:54:00:76:76:b5 Iaid: IPaddr:192.168.61.115 Prefix:24 Hostname:newest-cni-506800 Clientid:01:52:54:00:76:76:b5}
	I1025 22:21:24.202125  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined IP address 192.168.61.115 and MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:24.202244  138917 provision.go:138] copyHostCerts
	I1025 22:21:24.202313  138917 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-80960/.minikube/ca.pem, removing ...
	I1025 22:21:24.202336  138917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-80960/.minikube/ca.pem
	I1025 22:21:24.202427  138917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-80960/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17488-80960/.minikube/ca.pem (1082 bytes)
	I1025 22:21:24.202562  138917 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-80960/.minikube/cert.pem, removing ...
	I1025 22:21:24.202574  138917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-80960/.minikube/cert.pem
	I1025 22:21:24.202616  138917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-80960/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17488-80960/.minikube/cert.pem (1123 bytes)
	I1025 22:21:24.202705  138917 exec_runner.go:144] found /home/jenkins/minikube-integration/17488-80960/.minikube/key.pem, removing ...
	I1025 22:21:24.202714  138917 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17488-80960/.minikube/key.pem
	I1025 22:21:24.202747  138917 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17488-80960/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17488-80960/.minikube/key.pem (1679 bytes)
	I1025 22:21:24.202824  138917 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17488-80960/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17488-80960/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17488-80960/.minikube/certs/ca-key.pem org=jenkins.newest-cni-506800 san=[192.168.61.115 192.168.61.115 localhost 127.0.0.1 minikube newest-cni-506800]
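
configureAuth regenerates a server certificate whose SANs cover every name the daemon may be reached by: the VM IP (listed twice, as node and as endpoint), localhost, 127.0.0.1 and the hostnames. A minimal crypto/x509 sketch of issuing such a cert; for brevity it is self-signed here, whereas the real flow signs with the CA key pair listed above:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-506800"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the san=[...] list in the log line above.
    		IPAddresses: []net.IP{net.ParseIP("192.168.61.115"), net.ParseIP("127.0.0.1")},
    		DNSNames:    []string{"localhost", "minikube", "newest-cni-506800"},
    	}
    	// Self-signed for illustration; the real provisioner signs with the CA key.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
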
	I1025 22:21:24.295391  138917 provision.go:172] copyRemoteCerts
	I1025 22:21:24.295492  138917 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 22:21:24.295528  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHHostname
	I1025 22:21:24.298285  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:24.298602  138917 main.go:141] libmachine: (newest-cni-506800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:76:b5", ip: ""} in network mk-newest-cni-506800: {Iface:virbr3 ExpiryTime:2023-10-25 23:19:49 +0000 UTC Type:0 Mac:52:54:00:76:76:b5 Iaid: IPaddr:192.168.61.115 Prefix:24 Hostname:newest-cni-506800 Clientid:01:52:54:00:76:76:b5}
	I1025 22:21:24.298643  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined IP address 192.168.61.115 and MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:24.298821  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHPort
	I1025 22:21:24.299031  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHKeyPath
	I1025 22:21:24.299200  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHUsername
	I1025 22:21:24.299366  138917 sshutil.go:53] new ssh client: &{IP:192.168.61.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/newest-cni-506800/id_rsa Username:docker}
	I1025 22:21:24.386786  138917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 22:21:24.411584  138917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1025 22:21:24.434819  138917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 22:21:24.457066  138917 provision.go:86] duration metric: configureAuth took 260.426804ms
	I1025 22:21:24.457091  138917 buildroot.go:189] setting minikube options for container-runtime
	I1025 22:21:24.457273  138917 config.go:182] Loaded profile config "newest-cni-506800": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 22:21:24.457295  138917 main.go:141] libmachine: (newest-cni-506800) Calling .DriverName
	I1025 22:21:24.457617  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHHostname
	I1025 22:21:24.460187  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:24.460511  138917 main.go:141] libmachine: (newest-cni-506800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:76:b5", ip: ""} in network mk-newest-cni-506800: {Iface:virbr3 ExpiryTime:2023-10-25 23:19:49 +0000 UTC Type:0 Mac:52:54:00:76:76:b5 Iaid: IPaddr:192.168.61.115 Prefix:24 Hostname:newest-cni-506800 Clientid:01:52:54:00:76:76:b5}
	I1025 22:21:24.460551  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined IP address 192.168.61.115 and MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:24.460709  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHPort
	I1025 22:21:24.460926  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHKeyPath
	I1025 22:21:24.461127  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHKeyPath
	I1025 22:21:24.461304  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHUsername
	I1025 22:21:24.461490  138917 main.go:141] libmachine: Using SSH client type: native
	I1025 22:21:24.462017  138917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.115 22 <nil> <nil>}
	I1025 22:21:24.462036  138917 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 22:21:24.578281  138917 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1025 22:21:24.578307  138917 buildroot.go:70] root file system type: tmpfs
	I1025 22:21:24.578445  138917 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 22:21:24.578467  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHHostname
	I1025 22:21:24.581360  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:24.581751  138917 main.go:141] libmachine: (newest-cni-506800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:76:b5", ip: ""} in network mk-newest-cni-506800: {Iface:virbr3 ExpiryTime:2023-10-25 23:19:49 +0000 UTC Type:0 Mac:52:54:00:76:76:b5 Iaid: IPaddr:192.168.61.115 Prefix:24 Hostname:newest-cni-506800 Clientid:01:52:54:00:76:76:b5}
	I1025 22:21:24.581784  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined IP address 192.168.61.115 and MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:24.581929  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHPort
	I1025 22:21:24.582220  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHKeyPath
	I1025 22:21:24.582402  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHKeyPath
	I1025 22:21:24.582574  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHUsername
	I1025 22:21:24.582744  138917 main.go:141] libmachine: Using SSH client type: native
	I1025 22:21:24.583090  138917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.115 22 <nil> <nil>}
	I1025 22:21:24.583153  138917 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 22:21:24.709947  138917 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 22:21:24.709988  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHHostname
	I1025 22:21:24.712665  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:24.712987  138917 main.go:141] libmachine: (newest-cni-506800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:76:b5", ip: ""} in network mk-newest-cni-506800: {Iface:virbr3 ExpiryTime:2023-10-25 23:19:49 +0000 UTC Type:0 Mac:52:54:00:76:76:b5 Iaid: IPaddr:192.168.61.115 Prefix:24 Hostname:newest-cni-506800 Clientid:01:52:54:00:76:76:b5}
	I1025 22:21:24.713023  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined IP address 192.168.61.115 and MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:24.713239  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHPort
	I1025 22:21:24.713449  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHKeyPath
	I1025 22:21:24.713638  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHKeyPath
	I1025 22:21:24.713767  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHUsername
	I1025 22:21:24.713942  138917 main.go:141] libmachine: Using SSH client type: native
	I1025 22:21:24.714435  138917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.115 22 <nil> <nil>}
	I1025 22:21:24.714458  138917 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 22:21:25.575808  138917 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1025 22:21:25.575840  138917 machine.go:91] provisioned docker machine in 1.638756491s
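
The diff-or-replace one-liner above is an idempotence guard: the rendered unit is only moved into place, and docker only reloaded, enabled and restarted, when it differs from what is already installed (here diff fails because no unit existed yet, so the replace branch runs and systemd creates the enablement symlink). The same write-only-on-change pattern as a Go sketch:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // writeIfChanged installs newContent at path only when it differs from the
    // current file, and reports whether a follow-up reload/restart is needed.
    func writeIfChanged(path string, newContent []byte) (changed bool, err error) {
    	old, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(old, newContent) {
    		return false, nil // identical: skip the restart entirely
    	}
    	if err := os.WriteFile(path, newContent, 0o644); err != nil {
    		return false, err
    	}
    	return true, nil
    }

    func main() {
    	changed, err := writeIfChanged("/tmp/docker.service", []byte("[Unit]\n...\n"))
    	if err != nil {
    		panic(err)
    	}
    	if changed {
    		fmt.Println("daemon-reload && restart docker") // placeholder for the systemctl calls
    	}
    }
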
	I1025 22:21:25.575856  138917 start.go:300] post-start starting for "newest-cni-506800" (driver="kvm2")
	I1025 22:21:25.575870  138917 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 22:21:25.575891  138917 main.go:141] libmachine: (newest-cni-506800) Calling .DriverName
	I1025 22:21:25.576271  138917 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 22:21:25.576317  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHHostname
	I1025 22:21:25.579119  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:25.579489  138917 main.go:141] libmachine: (newest-cni-506800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:76:b5", ip: ""} in network mk-newest-cni-506800: {Iface:virbr3 ExpiryTime:2023-10-25 23:19:49 +0000 UTC Type:0 Mac:52:54:00:76:76:b5 Iaid: IPaddr:192.168.61.115 Prefix:24 Hostname:newest-cni-506800 Clientid:01:52:54:00:76:76:b5}
	I1025 22:21:25.579523  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined IP address 192.168.61.115 and MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:25.579672  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHPort
	I1025 22:21:25.579888  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHKeyPath
	I1025 22:21:25.580099  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHUsername
	I1025 22:21:25.580267  138917 sshutil.go:53] new ssh client: &{IP:192.168.61.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/newest-cni-506800/id_rsa Username:docker}
	I1025 22:21:25.671953  138917 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 22:21:25.676707  138917 info.go:137] Remote host: Buildroot 2021.02.12
	I1025 22:21:25.676728  138917 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-80960/.minikube/addons for local assets ...
	I1025 22:21:25.676791  138917 filesync.go:126] Scanning /home/jenkins/minikube-integration/17488-80960/.minikube/files for local assets ...
	I1025 22:21:25.676884  138917 filesync.go:149] local asset: /home/jenkins/minikube-integration/17488-80960/.minikube/files/etc/ssl/certs/882442.pem -> 882442.pem in /etc/ssl/certs
	I1025 22:21:25.677017  138917 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 22:21:25.687718  138917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/files/etc/ssl/certs/882442.pem --> /etc/ssl/certs/882442.pem (1708 bytes)
	I1025 22:21:25.711135  138917 start.go:303] post-start completed in 135.26254ms
	I1025 22:21:25.711158  138917 fix.go:56] fixHost completed within 20.600721076s
	I1025 22:21:25.711180  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHHostname
	I1025 22:21:25.714186  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:25.714577  138917 main.go:141] libmachine: (newest-cni-506800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:76:b5", ip: ""} in network mk-newest-cni-506800: {Iface:virbr3 ExpiryTime:2023-10-25 23:19:49 +0000 UTC Type:0 Mac:52:54:00:76:76:b5 Iaid: IPaddr:192.168.61.115 Prefix:24 Hostname:newest-cni-506800 Clientid:01:52:54:00:76:76:b5}
	I1025 22:21:25.714611  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined IP address 192.168.61.115 and MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:25.714771  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHPort
	I1025 22:21:25.714968  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHKeyPath
	I1025 22:21:25.715137  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHKeyPath
	I1025 22:21:25.715273  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHUsername
	I1025 22:21:25.715417  138917 main.go:141] libmachine: Using SSH client type: native
	I1025 22:21:25.715886  138917 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f8240] 0x7faf20 <nil>  [] 0s} 192.168.61.115 22 <nil> <nil>}
	I1025 22:21:25.715902  138917 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1025 22:21:25.828872  138917 main.go:141] libmachine: SSH cmd err, output: <nil>: 1698272485.780506847
	
	I1025 22:21:25.828902  138917 fix.go:206] guest clock: 1698272485.780506847
	I1025 22:21:25.828910  138917 fix.go:219] Guest: 2023-10-25 22:21:25.780506847 +0000 UTC Remote: 2023-10-25 22:21:25.711162051 +0000 UTC m=+20.753266297 (delta=69.344796ms)
	I1025 22:21:25.828929  138917 fix.go:190] guest clock delta is within tolerance: 69.344796ms
	I1025 22:21:25.828942  138917 start.go:83] releasing machines lock for "newest-cni-506800", held for 20.718517455s
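
The guest clock check reads the VM's time over SSH (a date +%s.%N invocation, printed above with its format verbs mangled to %!s(MISSING).%!N(MISSING) by the logger) and compares it with the host's; the 69ms delta is within tolerance, so no resync is needed. A small sketch of the comparison; the tolerance constant is an assumption, the log does not state it:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Guest time as parsed from the seconds.nanoseconds output in the log.
    	guest := time.Unix(1698272485, 780506847)
    	host := time.Date(2023, 10, 25, 22, 21, 25, 711162051, time.UTC)

    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 2 * time.Second // assumed threshold, not from the log
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
    }
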
	I1025 22:21:25.828968  138917 main.go:141] libmachine: (newest-cni-506800) Calling .DriverName
	I1025 22:21:25.829213  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetIP
	I1025 22:21:25.831850  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:25.832196  138917 main.go:141] libmachine: (newest-cni-506800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:76:b5", ip: ""} in network mk-newest-cni-506800: {Iface:virbr3 ExpiryTime:2023-10-25 23:19:49 +0000 UTC Type:0 Mac:52:54:00:76:76:b5 Iaid: IPaddr:192.168.61.115 Prefix:24 Hostname:newest-cni-506800 Clientid:01:52:54:00:76:76:b5}
	I1025 22:21:25.832252  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined IP address 192.168.61.115 and MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:25.832442  138917 main.go:141] libmachine: (newest-cni-506800) Calling .DriverName
	I1025 22:21:25.832980  138917 main.go:141] libmachine: (newest-cni-506800) Calling .DriverName
	I1025 22:21:25.833165  138917 main.go:141] libmachine: (newest-cni-506800) Calling .DriverName
	I1025 22:21:25.833267  138917 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 22:21:25.833305  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHHostname
	I1025 22:21:25.833362  138917 ssh_runner.go:195] Run: cat /version.json
	I1025 22:21:25.833398  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHHostname
	I1025 22:21:25.835794  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:25.835866  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:25.836197  138917 main.go:141] libmachine: (newest-cni-506800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:76:b5", ip: ""} in network mk-newest-cni-506800: {Iface:virbr3 ExpiryTime:2023-10-25 23:19:49 +0000 UTC Type:0 Mac:52:54:00:76:76:b5 Iaid: IPaddr:192.168.61.115 Prefix:24 Hostname:newest-cni-506800 Clientid:01:52:54:00:76:76:b5}
	I1025 22:21:25.836246  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined IP address 192.168.61.115 and MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:25.836280  138917 main.go:141] libmachine: (newest-cni-506800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:76:b5", ip: ""} in network mk-newest-cni-506800: {Iface:virbr3 ExpiryTime:2023-10-25 23:19:49 +0000 UTC Type:0 Mac:52:54:00:76:76:b5 Iaid: IPaddr:192.168.61.115 Prefix:24 Hostname:newest-cni-506800 Clientid:01:52:54:00:76:76:b5}
	I1025 22:21:25.836303  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined IP address 192.168.61.115 and MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:25.836416  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHPort
	I1025 22:21:25.836516  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHPort
	I1025 22:21:25.836611  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHKeyPath
	I1025 22:21:25.836712  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHKeyPath
	I1025 22:21:25.836761  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHUsername
	I1025 22:21:25.836833  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetSSHUsername
	I1025 22:21:25.836981  138917 sshutil.go:53] new ssh client: &{IP:192.168.61.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/newest-cni-506800/id_rsa Username:docker}
	I1025 22:21:25.836986  138917 sshutil.go:53] new ssh client: &{IP:192.168.61.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/newest-cni-506800/id_rsa Username:docker}
	I1025 22:21:25.917752  138917 ssh_runner.go:195] Run: systemctl --version
	I1025 22:21:25.944946  138917 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 22:21:25.951133  138917 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 22:21:25.951216  138917 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 22:21:25.969971  138917 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 22:21:25.969998  138917 start.go:472] detecting cgroup driver to use...
	I1025 22:21:25.970126  138917 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 22:21:25.990062  138917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1025 22:21:26.002709  138917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 22:21:26.015081  138917 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 22:21:26.015151  138917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 22:21:26.027264  138917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 22:21:26.039207  138917 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 22:21:26.051051  138917 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 22:21:26.062905  138917 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 22:21:26.073547  138917 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 22:21:26.083762  138917 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 22:21:26.092876  138917 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 22:21:26.102128  138917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:21:26.216399  138917 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 22:21:26.233308  138917 start.go:472] detecting cgroup driver to use...
	I1025 22:21:26.233397  138917 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 22:21:26.247879  138917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 22:21:26.262743  138917 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 22:21:26.280349  138917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 22:21:26.291811  138917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 22:21:26.302552  138917 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1025 22:21:26.328215  138917 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 22:21:26.340621  138917 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 22:21:26.358024  138917 ssh_runner.go:195] Run: which cri-dockerd
	I1025 22:21:26.361913  138917 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 22:21:26.371651  138917 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 22:21:26.387352  138917 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 22:21:26.501351  138917 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 22:21:26.610263  138917 docker.go:555] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 22:21:26.610369  138917 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
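The 130-byte /etc/docker/daemon.json pushed above is not shown in the log; the one setting it is known to carry (from the "configuring docker to use cgroupfs" line) is the cgroup driver, so a minimal sketch would be:

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }

The real file is 130 bytes, so it presumably carries additional keys not visible here.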
	I1025 22:21:26.626978  138917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:21:26.734042  138917 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 22:21:28.175604  138917 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.441502311s)
	I1025 22:21:28.175695  138917 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 22:21:28.273760  138917 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1025 22:21:28.391174  138917 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 22:21:28.496406  138917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:21:28.600307  138917 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1025 22:21:28.616906  138917 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 22:21:28.727964  138917 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1025 22:21:28.820970  138917 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 22:21:28.821067  138917 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 22:21:28.827444  138917 start.go:540] Will wait 60s for crictl version
	I1025 22:21:28.827524  138917 ssh_runner.go:195] Run: which crictl
	I1025 22:21:28.831679  138917 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 22:21:28.903698  138917 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
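The two "Will wait 60s" lines above amount to a readiness poll: stat the CRI socket until it exists, then ask crictl for the runtime version. A hypothetical shell equivalent of that loop:

    # poll up to 60s for the cri-dockerd socket, then query the runtime
    for _ in $(seq 1 60); do
      stat /var/run/cri-dockerd.sock >/dev/null 2>&1 && break
      sleep 1
    done
    sudo /usr/bin/crictl version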
	I1025 22:21:28.903771  138917 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 22:21:28.932015  138917 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 22:21:28.961833  138917 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.6 ...
	I1025 22:21:28.961885  138917 main.go:141] libmachine: (newest-cni-506800) Calling .GetIP
	I1025 22:21:28.965018  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:28.965491  138917 main.go:141] libmachine: (newest-cni-506800) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:76:76:b5", ip: ""} in network mk-newest-cni-506800: {Iface:virbr3 ExpiryTime:2023-10-25 23:19:49 +0000 UTC Type:0 Mac:52:54:00:76:76:b5 Iaid: IPaddr:192.168.61.115 Prefix:24 Hostname:newest-cni-506800 Clientid:01:52:54:00:76:76:b5}
	I1025 22:21:28.965524  138917 main.go:141] libmachine: (newest-cni-506800) DBG | domain newest-cni-506800 has defined IP address 192.168.61.115 and MAC address 52:54:00:76:76:b5 in network mk-newest-cni-506800
	I1025 22:21:28.965730  138917 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1025 22:21:28.970135  138917 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
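The /etc/hosts edit above is an idempotent append: grep -v strips any stale tab-separated entry for the name, the fresh mapping is echoed back, and the rebuilt file is copied over the original with sudo. The same idiom, generalized (NAME and IP are placeholders):

    NAME=host.minikube.internal IP=192.168.61.1
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/h.$$"
    sudo cp "/tmp/h.$$" /etc/hosts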
	I1025 22:21:28.983941  138917 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1025 22:21:28.985511  138917 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 22:21:28.985564  138917 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 22:21:29.006546  138917 docker.go:693] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 22:21:29.006576  138917 docker.go:623] Images already preloaded, skipping extraction
	I1025 22:21:29.006623  138917 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 22:21:29.026197  138917 docker.go:693] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 22:21:29.026233  138917 cache_images.go:84] Images are preloaded, skipping loading
	I1025 22:21:29.026304  138917 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 22:21:29.052412  138917 cni.go:84] Creating CNI manager for ""
	I1025 22:21:29.052450  138917 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 22:21:29.052479  138917 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I1025 22:21:29.052510  138917 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.115 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-506800 NodeName:newest-cni-506800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.115"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.115 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 22:21:29.052706  138917 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.115
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-506800"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.115
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.115"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
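A config like the one above is what kubeadm consumes at cluster bring-up. The test drives this through minikube rather than by hand (the file is staged as kubeadm.yaml.new and diffed against the existing kubeadm.yaml below, with re-init only on change), but the manual equivalent would be roughly:

    # sketch of consuming the staged config by hand
    sudo /var/lib/minikube/binaries/v1.28.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml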
	
	I1025 22:21:29.052793  138917 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-506800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.115
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:newest-cni-506800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
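The doubled ExecStart in the unit text above is the standard systemd drop-in idiom: an empty ExecStart= first clears the ExecStart inherited from the base kubelet.service, and the second line redefines it. A trimmed sketch of applying such an override by hand (flags shortened; the real drop-in is the 417-byte 10-kubeadm.conf scp'd below):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --config=/var/lib/kubelet/config.yaml
    EOF
    sudo systemctl daemon-reload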
	I1025 22:21:29.052862  138917 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1025 22:21:29.063078  138917 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 22:21:29.063171  138917 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 22:21:29.072686  138917 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (417 bytes)
	I1025 22:21:29.088396  138917 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 22:21:29.104212  138917 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1025 22:21:29.121029  138917 ssh_runner.go:195] Run: grep 192.168.61.115	control-plane.minikube.internal$ /etc/hosts
	I1025 22:21:29.124709  138917 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.115	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 22:21:29.135894  138917 certs.go:56] Setting up /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/newest-cni-506800 for IP: 192.168.61.115
	I1025 22:21:29.135945  138917 certs.go:190] acquiring lock for shared ca certs: {Name:mk95bc4bbfee71bbd045d1866d072591cdac4e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:21:29.136096  138917 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17488-80960/.minikube/ca.key
	I1025 22:21:29.136164  138917 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17488-80960/.minikube/proxy-client-ca.key
	I1025 22:21:29.136260  138917 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/newest-cni-506800/client.key
	I1025 22:21:29.136337  138917 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/newest-cni-506800/apiserver.key.ccee7f4a
	I1025 22:21:29.136422  138917 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/newest-cni-506800/proxy-client.key
	I1025 22:21:29.136539  138917 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-80960/.minikube/certs/home/jenkins/minikube-integration/17488-80960/.minikube/certs/88244.pem (1338 bytes)
	W1025 22:21:29.136572  138917 certs.go:433] ignoring /home/jenkins/minikube-integration/17488-80960/.minikube/certs/home/jenkins/minikube-integration/17488-80960/.minikube/certs/88244_empty.pem, impossibly tiny 0 bytes
	I1025 22:21:29.136583  138917 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-80960/.minikube/certs/home/jenkins/minikube-integration/17488-80960/.minikube/certs/ca-key.pem (1679 bytes)
	I1025 22:21:29.136611  138917 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-80960/.minikube/certs/home/jenkins/minikube-integration/17488-80960/.minikube/certs/ca.pem (1082 bytes)
	I1025 22:21:29.136636  138917 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-80960/.minikube/certs/home/jenkins/minikube-integration/17488-80960/.minikube/certs/cert.pem (1123 bytes)
	I1025 22:21:29.136667  138917 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-80960/.minikube/certs/home/jenkins/minikube-integration/17488-80960/.minikube/certs/key.pem (1679 bytes)
	I1025 22:21:29.136710  138917 certs.go:437] found cert: /home/jenkins/minikube-integration/17488-80960/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17488-80960/.minikube/files/etc/ssl/certs/882442.pem (1708 bytes)
	I1025 22:21:29.137375  138917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/newest-cni-506800/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1025 22:21:29.159894  138917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/newest-cni-506800/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 22:21:29.183732  138917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/newest-cni-506800/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 22:21:29.205740  138917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/newest-cni-506800/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 22:21:29.227477  138917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 22:21:29.249885  138917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 22:21:29.272267  138917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 22:21:29.294892  138917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 22:21:29.318338  138917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 22:21:29.342357  138917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/certs/88244.pem --> /usr/share/ca-certificates/88244.pem (1338 bytes)
	I1025 22:21:29.365058  138917 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17488-80960/.minikube/files/etc/ssl/certs/882442.pem --> /usr/share/ca-certificates/882442.pem (1708 bytes)
	I1025 22:21:29.387840  138917 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 22:21:29.404261  138917 ssh_runner.go:195] Run: openssl version
	I1025 22:21:29.409923  138917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/882442.pem && ln -fs /usr/share/ca-certificates/882442.pem /etc/ssl/certs/882442.pem"
	I1025 22:21:29.420393  138917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/882442.pem
	I1025 22:21:29.424998  138917 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 25 21:19 /usr/share/ca-certificates/882442.pem
	I1025 22:21:29.425051  138917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/882442.pem
	I1025 22:21:29.430364  138917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/882442.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 22:21:29.441652  138917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 22:21:29.452368  138917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:21:29.457010  138917 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 25 21:13 /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:21:29.457056  138917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 22:21:29.462703  138917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 22:21:29.473144  138917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/88244.pem && ln -fs /usr/share/ca-certificates/88244.pem /etc/ssl/certs/88244.pem"
	I1025 22:21:29.483914  138917 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/88244.pem
	I1025 22:21:29.488420  138917 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 25 21:19 /usr/share/ca-certificates/88244.pem
	I1025 22:21:29.488468  138917 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/88244.pem
	I1025 22:21:29.494278  138917 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/88244.pem /etc/ssl/certs/51391683.0"
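The symlink names in the cert steps above come from OpenSSL's hashed-directory lookup: openssl x509 -hash prints the subject-name hash, and OpenSSL resolves CA files in /etc/ssl/certs via <hash>.0 links. Reproducing the b5213941.0 link from the log:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$h"    # b5213941, per the symlink created above
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"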
	I1025 22:21:29.504993  138917 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1025 22:21:29.509373  138917 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 22:21:29.515014  138917 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 22:21:29.520553  138917 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 22:21:29.526066  138917 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 22:21:29.531749  138917 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 22:21:29.537199  138917 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
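Each -checkend 86400 above exits non-zero if the certificate expires within the next 86400 seconds (24h), which is presumably the signal used here to decide whether certs need regenerating. In isolation:

    # exits 0 while the cert still has more than 24h of validity left
    if ! openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
      echo "expires within 24h - regenerate"
    fi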
	I1025 22:21:29.543276  138917 kubeadm.go:404] StartCluster: {Name:newest-cni-506800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:newest-cni-506800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.115 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 22:21:29.543384  138917 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 22:21:29.562908  138917 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 22:21:29.573437  138917 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1025 22:21:29.573457  138917 kubeadm.go:636] restartCluster start
	I1025 22:21:29.573498  138917 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 22:21:29.583061  138917 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 22:21:29.583634  138917 kubeconfig.go:135] verify returned: extract IP: "newest-cni-506800" does not appear in /home/jenkins/minikube-integration/17488-80960/kubeconfig
	I1025 22:21:29.583841  138917 kubeconfig.go:146] "newest-cni-506800" context is missing from /home/jenkins/minikube-integration/17488-80960/kubeconfig - will repair!
	I1025 22:21:29.584374  138917 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-80960/kubeconfig: {Name:mk4723f12542c40c1c944f4b4dc7af3f0a23b0b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 22:21:29.585810  138917 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 22:21:29.595289  138917 api_server.go:166] Checking apiserver status ...
	I1025 22:21:29.595341  138917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 22:21:29.606945  138917 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1025 22:21:29.606959  138917 api_server.go:166] Checking apiserver status ...
	I1025 22:21:29.606989  138917 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1025 22:21:29.617965  138917 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-10-25 22:13:58 UTC, ends at Wed 2023-10-25 22:21:31 UTC. --
	Oct 25 22:20:31 old-k8s-version-820759 dockerd[1092]: time="2023-10-25T22:20:31.036876586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 22:20:31 old-k8s-version-820759 dockerd[1092]: time="2023-10-25T22:20:31.472623320Z" level=info msg="shim disconnected" id=167a936628f88319d125dd92439e1870be7d9c61f266e1f26642d0e8ced4ecb7 namespace=moby
	Oct 25 22:20:31 old-k8s-version-820759 dockerd[1092]: time="2023-10-25T22:20:31.472710209Z" level=warning msg="cleaning up after shim disconnected" id=167a936628f88319d125dd92439e1870be7d9c61f266e1f26642d0e8ced4ecb7 namespace=moby
	Oct 25 22:20:31 old-k8s-version-820759 dockerd[1092]: time="2023-10-25T22:20:31.472722597Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 25 22:20:31 old-k8s-version-820759 dockerd[1086]: time="2023-10-25T22:20:31.473664946Z" level=info msg="ignoring event" container=167a936628f88319d125dd92439e1870be7d9c61f266e1f26642d0e8ced4ecb7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 22:20:49 old-k8s-version-820759 dockerd[1092]: time="2023-10-25T22:20:49.972872195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 25 22:20:49 old-k8s-version-820759 dockerd[1092]: time="2023-10-25T22:20:49.972992227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 22:20:49 old-k8s-version-820759 dockerd[1092]: time="2023-10-25T22:20:49.973019467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 25 22:20:49 old-k8s-version-820759 dockerd[1092]: time="2023-10-25T22:20:49.973166041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 22:20:50 old-k8s-version-820759 dockerd[1092]: time="2023-10-25T22:20:50.394801213Z" level=info msg="shim disconnected" id=cdeb213e451c4e585395bcc3c39e283b0a702e4e2fae401e708b58d9b58b285c namespace=moby
	Oct 25 22:20:50 old-k8s-version-820759 dockerd[1086]: time="2023-10-25T22:20:50.396022286Z" level=info msg="ignoring event" container=cdeb213e451c4e585395bcc3c39e283b0a702e4e2fae401e708b58d9b58b285c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 22:20:50 old-k8s-version-820759 dockerd[1092]: time="2023-10-25T22:20:50.396539155Z" level=warning msg="cleaning up after shim disconnected" id=cdeb213e451c4e585395bcc3c39e283b0a702e4e2fae401e708b58d9b58b285c namespace=moby
	Oct 25 22:20:50 old-k8s-version-820759 dockerd[1092]: time="2023-10-25T22:20:50.396556725Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 25 22:20:50 old-k8s-version-820759 dockerd[1092]: time="2023-10-25T22:20:50.421621707Z" level=warning msg="cleanup warnings time=\"2023-10-25T22:20:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Oct 25 22:20:55 old-k8s-version-820759 dockerd[1086]: time="2023-10-25T22:20:55.887526894Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 25 22:20:55 old-k8s-version-820759 dockerd[1086]: time="2023-10-25T22:20:55.887929165Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 25 22:20:55 old-k8s-version-820759 dockerd[1086]: time="2023-10-25T22:20:55.895563084Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 25 22:21:21 old-k8s-version-820759 dockerd[1092]: time="2023-10-25T22:21:21.954687579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 25 22:21:21 old-k8s-version-820759 dockerd[1092]: time="2023-10-25T22:21:21.954756931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 22:21:21 old-k8s-version-820759 dockerd[1092]: time="2023-10-25T22:21:21.954774417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 25 22:21:21 old-k8s-version-820759 dockerd[1092]: time="2023-10-25T22:21:21.954785997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 25 22:21:22 old-k8s-version-820759 dockerd[1086]: time="2023-10-25T22:21:22.349620650Z" level=info msg="ignoring event" container=3bdf2e841b7482bc5414b48b617bc829cd8112906910d76318b5981b0d425588 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 25 22:21:22 old-k8s-version-820759 dockerd[1092]: time="2023-10-25T22:21:22.351695352Z" level=info msg="shim disconnected" id=3bdf2e841b7482bc5414b48b617bc829cd8112906910d76318b5981b0d425588 namespace=moby
	Oct 25 22:21:22 old-k8s-version-820759 dockerd[1092]: time="2023-10-25T22:21:22.352234590Z" level=warning msg="cleaning up after shim disconnected" id=3bdf2e841b7482bc5414b48b617bc829cd8112906910d76318b5981b0d425588 namespace=moby
	Oct 25 22:21:22 old-k8s-version-820759 dockerd[1092]: time="2023-10-25T22:21:22.352482033Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* time="2023-10-25T22:21:31Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	CONTAINER ID   IMAGE                    COMMAND                  CREATED              STATUS                     PORTS     NAMES
	3bdf2e841b74   a90209bb39e3             "nginx -g 'daemon of…"   10 seconds ago       Exited (1) 9 seconds ago             k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-d6b4b5544-q6gzb_kubernetes-dashboard_1372fa52-9bad-48fb-92e7-c2f08f93b77f_3
	eb4bb8c9f61b   kubernetesui/dashboard   "/dashboard --insecu…"   About a minute ago   Up About a minute                    k8s_kubernetes-dashboard_kubernetes-dashboard-84b68f675b-tcrrw_kubernetes-dashboard_9b676aa0-0a76-41e2-9a1b-b7fede1b4713_0
	18b6f433220b   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_kubernetes-dashboard-84b68f675b-tcrrw_kubernetes-dashboard_9b676aa0-0a76-41e2-9a1b-b7fede1b4713_0
	f26655316667   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_dashboard-metrics-scraper-d6b4b5544-q6gzb_kubernetes-dashboard_1372fa52-9bad-48fb-92e7-c2f08f93b77f_0
	85f617213a56   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_metrics-server-74d5856cc6-c7s5p_kube-system_1b4df6e5-c51d-42bf-bff1-a4271ca59446_0
	f67353480c19   6e38f40d628d             "/storage-provisioner"   About a minute ago   Up About a minute                    k8s_storage-provisioner_storage-provisioner_kube-system_f3b29f55-a4dd-4ca9-beca-adce87e76f8a_0
	1e73ad53b1b6   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_storage-provisioner_kube-system_f3b29f55-a4dd-4ca9-beca-adce87e76f8a_0
	e9a3eb9309e7   bf261d157914             "/coredns -conf /etc…"   About a minute ago   Up About a minute                    k8s_coredns_coredns-5644d7b6d9-d26x9_kube-system_378324a4-b86d-4873-a9b1-5d4a7f15843f_0
	26365c8956dd   c21b0c7400f9             "/usr/local/bin/kube…"   About a minute ago   Up About a minute                    k8s_kube-proxy_kube-proxy-7dhp5_kube-system_1b896127-891f-4968-991c-446dffbdc667_0
	2a313bddb668   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_coredns-5644d7b6d9-d26x9_kube-system_378324a4-b86d-4873-a9b1-5d4a7f15843f_0
	f755a3d09fe4   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_kube-proxy-7dhp5_kube-system_1b896127-891f-4968-991c-446dffbdc667_0
	5df5fb5966ed   b2756210eeab             "etcd --advertise-cl…"   About a minute ago   Up About a minute                    k8s_etcd_etcd-old-k8s-version-820759_kube-system_74518b503a6cb77ebd7a4c88db6af062_0
	066a01cff727   301ddc62b80b             "kube-scheduler --au…"   About a minute ago   Up About a minute                    k8s_kube-scheduler_kube-scheduler-old-k8s-version-820759_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	c5aba14c59c1   06a629a7e51c             "kube-controller-man…"   About a minute ago   Up About a minute                    k8s_kube-controller-manager_kube-controller-manager-old-k8s-version-820759_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	ae766ce5c3dc   b305571ca60a             "kube-apiserver --ad…"   About a minute ago   Up About a minute                    k8s_kube-apiserver_kube-apiserver-old-k8s-version-820759_kube-system_5128e9815f36ba98e5f52e339507aae9_0
	09b5e341dbc6   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_etcd-old-k8s-version-820759_kube-system_74518b503a6cb77ebd7a4c88db6af062_0
	43c63630eda9   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_kube-scheduler-old-k8s-version-820759_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	f58448d90bcd   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_kube-controller-manager-old-k8s-version-820759_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	5f60a7838535   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_kube-apiserver-old-k8s-version-820759_kube-system_5128e9815f36ba98e5f52e339507aae9_0
	
	* 
	* ==> coredns [e9a3eb9309e7] <==
	* .:53
	2023-10-25T22:20:07.184Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-10-25T22:20:07.184Z [INFO] CoreDNS-1.6.2
	2023-10-25T22:20:07.184Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-10-25T22:20:30.377Z [INFO] plugin/reload: Running configuration MD5 = 6485d707d03bc60ccfd5c7f4afc8c245
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-820759
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-820759
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=260f728c67096e5c74725dd26fc91a3a236708fc
	                    minikube.k8s.io/name=old-k8s-version-820759
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_25T22_19_49_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 25 Oct 2023 22:19:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 25 Oct 2023 22:20:44 +0000   Wed, 25 Oct 2023 22:19:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 25 Oct 2023 22:20:44 +0000   Wed, 25 Oct 2023 22:19:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 25 Oct 2023 22:20:44 +0000   Wed, 25 Oct 2023 22:19:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 25 Oct 2023 22:20:44 +0000   Wed, 25 Oct 2023 22:19:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.107
	  Hostname:    old-k8s-version-820759
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 c3096ef0f0b74e55b281b1513ea3772c
	 System UUID:                c3096ef0-f0b7-4e55-b281-b1513ea3772c
	 Boot ID:                    891e8ade-17ea-4e64-9fa5-a9bdcf7b14de
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://24.0.6
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-d26x9                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     87s
	  kube-system                etcd-old-k8s-version-820759                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                kube-apiserver-old-k8s-version-820759             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                kube-controller-manager-old-k8s-version-820759    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                kube-proxy-7dhp5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                kube-scheduler-old-k8s-version-820759             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                metrics-server-74d5856cc6-c7s5p                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         83s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kubernetes-dashboard       dashboard-metrics-scraper-d6b4b5544-q6gzb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kubernetes-dashboard       kubernetes-dashboard-84b68f675b-tcrrw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From                                Message
	  ----    ------                   ----                 ----                                -------
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)  kubelet, old-k8s-version-820759     Node old-k8s-version-820759 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)  kubelet, old-k8s-version-820759     Node old-k8s-version-820759 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x7 over 116s)  kubelet, old-k8s-version-820759     Node old-k8s-version-820759 status is now: NodeHasSufficientPID
	  Normal  Starting                 85s                  kube-proxy, old-k8s-version-820759  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071387] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.330737] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.500650] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147218] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.430969] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Oct25 22:14] systemd-fstab-generator[516]: Ignoring "noauto" for root device
	[  +0.113681] systemd-fstab-generator[527]: Ignoring "noauto" for root device
	[  +1.319614] systemd-fstab-generator[796]: Ignoring "noauto" for root device
	[  +0.366243] systemd-fstab-generator[833]: Ignoring "noauto" for root device
	[  +0.120575] systemd-fstab-generator[844]: Ignoring "noauto" for root device
	[  +0.136420] systemd-fstab-generator[857]: Ignoring "noauto" for root device
	[  +6.197339] systemd-fstab-generator[1076]: Ignoring "noauto" for root device
	[  +3.546834] kauditd_printk_skb: 67 callbacks suppressed
	[ +14.750223] systemd-fstab-generator[1499]: Ignoring "noauto" for root device
	[  +0.589317] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.165034] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +23.764766] kauditd_printk_skb: 5 callbacks suppressed
	[Oct25 22:19] systemd-fstab-generator[5529]: Ignoring "noauto" for root device
	[Oct25 22:20] kauditd_printk_skb: 2 callbacks suppressed
	[  +3.556194] hrtimer: interrupt took 3411452 ns
	
	* 
	* ==> etcd [5df5fb5966ed] <==
	* 2023-10-25 22:19:39.491806 I | raft: 43514042847e787e received MsgVoteResp from 43514042847e787e at term 2
	2023-10-25 22:19:39.491914 I | raft: 43514042847e787e became leader at term 2
	2023-10-25 22:19:39.492133 I | raft: raft.node: 43514042847e787e elected leader 43514042847e787e at term 2
	2023-10-25 22:19:39.492650 I | etcdserver: setting up the initial cluster version to 3.3
	2023-10-25 22:19:39.494499 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-10-25 22:19:39.494569 I | etcdserver/api: enabled capabilities for version 3.3
	2023-10-25 22:19:39.494718 I | etcdserver: published {Name:old-k8s-version-820759 ClientURLs:[https://192.168.72.107:2379]} to cluster 18f3527479ebc6
	2023-10-25 22:19:39.494727 I | embed: ready to serve client requests
	2023-10-25 22:19:39.495649 I | embed: ready to serve client requests
	2023-10-25 22:19:39.497220 I | embed: serving client requests on 192.168.72.107:2379
	2023-10-25 22:19:39.497510 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-25 22:19:45.202589 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" " with result "range_response_count:0 size:4" took too long (109.682218ms) to execute
	2023-10-25 22:19:45.202845 W | etcdserver: read-only range request "key:\"/registry/priorityclasses/system-cluster-critical\" " with result "range_response_count:0 size:4" took too long (110.258539ms) to execute
	2023-10-25 22:19:45.205733 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:177" took too long (113.234738ms) to execute
	2023-10-25 22:20:01.267226 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/namespace-controller\" " with result "range_response_count:1 size:207" took too long (194.709551ms) to execute
	2023-10-25 22:20:01.518312 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (182.584006ms) to execute
	2023-10-25 22:20:11.263858 W | etcdserver: request "header:<ID:8682530414369078922 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-84b68f675b-tcrrw\" mod_revision:482 > success:<request_put:<key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-84b68f675b-tcrrw\" value_size:1784 >> failure:<request_range:<key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-84b68f675b-tcrrw\" > >>" with result "size:16" took too long (402.136422ms) to execute
	2023-10-25 22:20:11.682841 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/storage-provisioner\" " with result "range_response_count:1 size:2049" took too long (405.071477ms) to execute
	2023-10-25 22:20:11.683564 W | etcdserver: request "header:<ID:8682530414369078927 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" mod_revision:486 > success:<request_put:<key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" value_size:2664 >> failure:<request_range:<key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" > >>" with result "size:16" took too long (165.278739ms) to execute
	2023-10-25 22:20:11.916439 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/storage-provisioner\" " with result "range_response_count:1 size:2049" took too long (227.425874ms) to execute
	2023-10-25 22:20:11.916790 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:752" took too long (332.062245ms) to execute
	2023-10-25 22:20:11.916955 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-820759\" " with result "range_response_count:1 size:3041" took too long (638.638058ms) to execute
	2023-10-25 22:20:11.919968 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (639.91116ms) to execute
	2023-10-25 22:20:12.319331 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-5644d7b6d9-d26x9\" " with result "range_response_count:1 size:1890" took too long (107.023042ms) to execute
	2023-10-25 22:20:29.299689 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-74d5856cc6-c7s5p.179179c25887e551\" " with result "range_response_count:1 size:494" took too long (120.390962ms) to execute
	
	* 
	* ==> kernel <==
	*  22:21:32 up 7 min,  0 users,  load average: 1.83, 1.00, 0.41
	Linux old-k8s-version-820759 5.10.57 #1 SMP Mon Oct 16 20:35:28 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [ae766ce5c3dc] <==
	* I1025 22:19:45.216394       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I1025 22:19:45.216415       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1025 22:19:46.948275       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 22:19:47.227795       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 22:19:47.498415       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	W1025 22:19:47.603396       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.72.107]
	I1025 22:19:47.604723       1 controller.go:606] quota admission added evaluator for: endpoints
	I1025 22:19:48.380126       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1025 22:19:48.711810       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1025 22:19:49.013719       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1025 22:20:04.172670       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1025 22:20:04.197802       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1025 22:20:04.334778       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I1025 22:20:10.317779       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1025 22:20:10.318224       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 22:20:10.318463       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1025 22:20:10.318547       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1025 22:20:11.934376       1 trace.go:116] Trace[769389807]: "Get" url:/api/v1/nodes/old-k8s-version-820759 (started: 2023-10-25 22:20:11.273519798 +0000 UTC m=+34.682061259) (total time: 660.818811ms):
	Trace[769389807]: [660.142379ms] [660.11401ms] About to write a response
	I1025 22:21:10.319104       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1025 22:21:10.319237       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 22:21:10.319275       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1025 22:21:10.319283       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [c5aba14c59c1] <==
	* E1025 22:20:09.366277       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1025 22:20:09.366956       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"29b18906-4307-4608-85b7-b5b0e192f274", APIVersion:"apps/v1", ResourceVersion:"433", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1025 22:20:09.371534       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"ba15e4fd-617e-4009-bbb0-a6b47a9cbe4b", APIVersion:"apps/v1", ResourceVersion:"437", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1025 22:20:09.375850       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1025 22:20:09.376366       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"29b18906-4307-4608-85b7-b5b0e192f274", APIVersion:"apps/v1", ResourceVersion:"433", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1025 22:20:09.383158       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"cabe8cc8-b4cc-4d71-b434-d5eac58d2253", APIVersion:"apps/v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-84b68f675b to 1
	E1025 22:20:09.387876       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1025 22:20:09.434342       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1025 22:20:09.434641       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"29b18906-4307-4608-85b7-b5b0e192f274", APIVersion:"apps/v1", ResourceVersion:"433", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1025 22:20:09.463917       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"ba15e4fd-617e-4009-bbb0-a6b47a9cbe4b", APIVersion:"apps/v1", ResourceVersion:"437", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1025 22:20:09.565725       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1025 22:20:09.565815       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"29b18906-4307-4608-85b7-b5b0e192f274", APIVersion:"apps/v1", ResourceVersion:"433", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1025 22:20:09.579724       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1025 22:20:09.589444       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"ba15e4fd-617e-4009-bbb0-a6b47a9cbe4b", APIVersion:"apps/v1", ResourceVersion:"441", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1025 22:20:09.589603       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1025 22:20:09.611838       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1025 22:20:09.612002       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"ba15e4fd-617e-4009-bbb0-a6b47a9cbe4b", APIVersion:"apps/v1", ResourceVersion:"441", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1025 22:20:09.676623       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1025 22:20:09.677163       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"ba15e4fd-617e-4009-bbb0-a6b47a9cbe4b", APIVersion:"apps/v1", ResourceVersion:"441", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1025 22:20:10.743610       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"29b18906-4307-4608-85b7-b5b0e192f274", APIVersion:"apps/v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-d6b4b5544-q6gzb
	I1025 22:20:10.805529       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"ba15e4fd-617e-4009-bbb0-a6b47a9cbe4b", APIVersion:"apps/v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-84b68f675b-tcrrw
	E1025 22:20:34.683433       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1025 22:20:36.424318       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1025 22:21:04.935424       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1025 22:21:08.426628       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [26365c8956dd] <==
	* W1025 22:20:06.104641       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1025 22:20:06.134968       1 node.go:135] Successfully retrieved node IP: 192.168.72.107
	I1025 22:20:06.135099       1 server_others.go:149] Using iptables Proxier.
	I1025 22:20:06.136265       1 server.go:529] Version: v1.16.0
	I1025 22:20:06.144023       1 config.go:313] Starting service config controller
	I1025 22:20:06.144188       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1025 22:20:06.144322       1 config.go:131] Starting endpoints config controller
	I1025 22:20:06.144340       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1025 22:20:06.245439       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1025 22:20:06.245763       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [066a01cff727] <==
	* W1025 22:19:44.204745       1 authentication.go:79] Authentication is disabled
	I1025 22:19:44.204847       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1025 22:19:44.206496       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1025 22:19:44.280251       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 22:19:44.280766       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1025 22:19:44.281122       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1025 22:19:44.286499       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1025 22:19:44.286748       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1025 22:19:44.286966       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1025 22:19:44.287336       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1025 22:19:44.287459       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 22:19:44.287554       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1025 22:19:44.287729       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 22:19:44.288432       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1025 22:19:45.282736       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 22:19:45.285741       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1025 22:19:45.289460       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1025 22:19:45.291188       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1025 22:19:45.294181       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1025 22:19:45.296428       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1025 22:19:45.298643       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 22:19:45.299126       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1025 22:19:45.302288       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 22:19:45.302638       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1025 22:19:45.304404       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-10-25 22:13:58 UTC, ends at Wed 2023-10-25 22:21:32 UTC. --
	Oct 25 22:20:29 old-k8s-version-820759 kubelet[5535]: W1025 22:20:29.874527    5535 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-q6gzb through plugin: invalid network status for
	Oct 25 22:20:30 old-k8s-version-820759 kubelet[5535]: W1025 22:20:30.918648    5535 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-q6gzb through plugin: invalid network status for
	Oct 25 22:20:31 old-k8s-version-820759 kubelet[5535]: W1025 22:20:31.522862    5535 container.go:409] Failed to create summary reader for "/kubepods/besteffort/pod1372fa52-9bad-48fb-92e7-c2f08f93b77f/167a936628f88319d125dd92439e1870be7d9c61f266e1f26642d0e8ced4ecb7": none of the resources are being tracked.
	Oct 25 22:20:31 old-k8s-version-820759 kubelet[5535]: W1025 22:20:31.937373    5535 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-q6gzb through plugin: invalid network status for
	Oct 25 22:20:31 old-k8s-version-820759 kubelet[5535]: E1025 22:20:31.952935    5535 pod_workers.go:191] Error syncing pod 1372fa52-9bad-48fb-92e7-c2f08f93b77f ("dashboard-metrics-scraper-d6b4b5544-q6gzb_kubernetes-dashboard(1372fa52-9bad-48fb-92e7-c2f08f93b77f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-q6gzb_kubernetes-dashboard(1372fa52-9bad-48fb-92e7-c2f08f93b77f)"
	Oct 25 22:20:32 old-k8s-version-820759 kubelet[5535]: W1025 22:20:32.959393    5535 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-q6gzb through plugin: invalid network status for
	Oct 25 22:20:32 old-k8s-version-820759 kubelet[5535]: E1025 22:20:32.966313    5535 pod_workers.go:191] Error syncing pod 1372fa52-9bad-48fb-92e7-c2f08f93b77f ("dashboard-metrics-scraper-d6b4b5544-q6gzb_kubernetes-dashboard(1372fa52-9bad-48fb-92e7-c2f08f93b77f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-q6gzb_kubernetes-dashboard(1372fa52-9bad-48fb-92e7-c2f08f93b77f)"
	Oct 25 22:20:36 old-k8s-version-820759 kubelet[5535]: E1025 22:20:36.924731    5535 pod_workers.go:191] Error syncing pod 1372fa52-9bad-48fb-92e7-c2f08f93b77f ("dashboard-metrics-scraper-d6b4b5544-q6gzb_kubernetes-dashboard(1372fa52-9bad-48fb-92e7-c2f08f93b77f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-q6gzb_kubernetes-dashboard(1372fa52-9bad-48fb-92e7-c2f08f93b77f)"
	Oct 25 22:20:43 old-k8s-version-820759 kubelet[5535]: E1025 22:20:43.877991    5535 pod_workers.go:191] Error syncing pod 1b4df6e5-c51d-42bf-bff1-a4271ca59446 ("metrics-server-74d5856cc6-c7s5p_kube-system(1b4df6e5-c51d-42bf-bff1-a4271ca59446)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 25 22:20:50 old-k8s-version-820759 kubelet[5535]: W1025 22:20:50.125539    5535 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-q6gzb through plugin: invalid network status for
	Oct 25 22:20:51 old-k8s-version-820759 kubelet[5535]: W1025 22:20:51.389757    5535 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-q6gzb through plugin: invalid network status for
	Oct 25 22:20:51 old-k8s-version-820759 kubelet[5535]: E1025 22:20:51.398367    5535 pod_workers.go:191] Error syncing pod 1372fa52-9bad-48fb-92e7-c2f08f93b77f ("dashboard-metrics-scraper-d6b4b5544-q6gzb_kubernetes-dashboard(1372fa52-9bad-48fb-92e7-c2f08f93b77f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-q6gzb_kubernetes-dashboard(1372fa52-9bad-48fb-92e7-c2f08f93b77f)"
	Oct 25 22:20:52 old-k8s-version-820759 kubelet[5535]: W1025 22:20:52.408312    5535 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-q6gzb through plugin: invalid network status for
	Oct 25 22:20:55 old-k8s-version-820759 kubelet[5535]: E1025 22:20:55.896022    5535 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 25 22:20:55 old-k8s-version-820759 kubelet[5535]: E1025 22:20:55.896159    5535 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 25 22:20:55 old-k8s-version-820759 kubelet[5535]: E1025 22:20:55.896215    5535 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 25 22:20:55 old-k8s-version-820759 kubelet[5535]: E1025 22:20:55.896249    5535 pod_workers.go:191] Error syncing pod 1b4df6e5-c51d-42bf-bff1-a4271ca59446 ("metrics-server-74d5856cc6-c7s5p_kube-system(1b4df6e5-c51d-42bf-bff1-a4271ca59446)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 25 22:20:56 old-k8s-version-820759 kubelet[5535]: E1025 22:20:56.924979    5535 pod_workers.go:191] Error syncing pod 1372fa52-9bad-48fb-92e7-c2f08f93b77f ("dashboard-metrics-scraper-d6b4b5544-q6gzb_kubernetes-dashboard(1372fa52-9bad-48fb-92e7-c2f08f93b77f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-q6gzb_kubernetes-dashboard(1372fa52-9bad-48fb-92e7-c2f08f93b77f)"
	Oct 25 22:21:08 old-k8s-version-820759 kubelet[5535]: E1025 22:21:08.874178    5535 pod_workers.go:191] Error syncing pod 1b4df6e5-c51d-42bf-bff1-a4271ca59446 ("metrics-server-74d5856cc6-c7s5p_kube-system(1b4df6e5-c51d-42bf-bff1-a4271ca59446)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 25 22:21:09 old-k8s-version-820759 kubelet[5535]: E1025 22:21:09.871482    5535 pod_workers.go:191] Error syncing pod 1372fa52-9bad-48fb-92e7-c2f08f93b77f ("dashboard-metrics-scraper-d6b4b5544-q6gzb_kubernetes-dashboard(1372fa52-9bad-48fb-92e7-c2f08f93b77f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-q6gzb_kubernetes-dashboard(1372fa52-9bad-48fb-92e7-c2f08f93b77f)"
	Oct 25 22:21:22 old-k8s-version-820759 kubelet[5535]: W1025 22:21:22.658501    5535 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-q6gzb through plugin: invalid network status for
	Oct 25 22:21:22 old-k8s-version-820759 kubelet[5535]: E1025 22:21:22.665691    5535 pod_workers.go:191] Error syncing pod 1372fa52-9bad-48fb-92e7-c2f08f93b77f ("dashboard-metrics-scraper-d6b4b5544-q6gzb_kubernetes-dashboard(1372fa52-9bad-48fb-92e7-c2f08f93b77f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-q6gzb_kubernetes-dashboard(1372fa52-9bad-48fb-92e7-c2f08f93b77f)"
	Oct 25 22:21:22 old-k8s-version-820759 kubelet[5535]: E1025 22:21:22.884748    5535 pod_workers.go:191] Error syncing pod 1b4df6e5-c51d-42bf-bff1-a4271ca59446 ("metrics-server-74d5856cc6-c7s5p_kube-system(1b4df6e5-c51d-42bf-bff1-a4271ca59446)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 25 22:21:23 old-k8s-version-820759 kubelet[5535]: W1025 22:21:23.676983    5535 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-q6gzb through plugin: invalid network status for
	Oct 25 22:21:26 old-k8s-version-820759 kubelet[5535]: E1025 22:21:26.924358    5535 pod_workers.go:191] Error syncing pod 1372fa52-9bad-48fb-92e7-c2f08f93b77f ("dashboard-metrics-scraper-d6b4b5544-q6gzb_kubernetes-dashboard(1372fa52-9bad-48fb-92e7-c2f08f93b77f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-q6gzb_kubernetes-dashboard(1372fa52-9bad-48fb-92e7-c2f08f93b77f)"
	
	* 
	* ==> kubernetes-dashboard [eb4bb8c9f61b] <==
	* 2023/10/25 22:20:23 Using namespace: kubernetes-dashboard
	2023/10/25 22:20:23 Using in-cluster config to connect to apiserver
	2023/10/25 22:20:23 Using secret token for csrf signing
	2023/10/25 22:20:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2023/10/25 22:20:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2023/10/25 22:20:23 Successful initial request to the apiserver, version: v1.16.0
	2023/10/25 22:20:23 Generating JWE encryption key
	2023/10/25 22:20:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2023/10/25 22:20:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2023/10/25 22:20:23 Initializing JWE encryption key from synchronized object
	2023/10/25 22:20:23 Creating in-cluster Sidecar client
	2023/10/25 22:20:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/10/25 22:20:23 Serving insecurely on HTTP port: 9090
	2023/10/25 22:20:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/10/25 22:21:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/10/25 22:20:23 Starting overwatch
	
	* 
	* ==> storage-provisioner [f67353480c19] <==
	* I1025 22:20:08.935214       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 22:20:09.103186       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 22:20:09.105822       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 22:20:09.422695       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 22:20:09.428392       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-820759_4a917bc3-d152-4947-bb55-7442484a1034!
	I1025 22:20:09.463393       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6e8d813e-bcc5-4a84-b8d5-fe2a45db686b", APIVersion:"v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-820759_4a917bc3-d152-4947-bb55-7442484a1034 became leader
	I1025 22:20:09.631719       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-820759_4a917bc3-d152-4947-bb55-7442484a1034!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-820759 -n old-k8s-version-820759
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-820759 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-c7s5p
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-820759 describe pod metrics-server-74d5856cc6-c7s5p
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-820759 describe pod metrics-server-74d5856cc6-c7s5p: exit status 1 (63.282506ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-c7s5p" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-820759 describe pod metrics-server-74d5856cc6-c7s5p: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.97s)
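Two notes on the failure above, inferred from the captured logs rather than stated by the harness: the kubelet's ImagePullBackOff for "fake.domain/registry.k8s.io/echoserver:1.4" appears to be deliberate test input (a registry that can never resolve), and the final NotFound from 'kubectl describe pod' is most likely a race: the metrics-server pod named by the field selector had already been deleted or replaced by the time the post-mortem describe ran. A sketch of a less racy post-mortem query, assuming the addon's conventional k8s-app=metrics-server label (the label itself is not shown in this report):

	kubectl --context old-k8s-version-820759 get po -A --field-selector=status.phase!=Running
	# Selecting by label instead of pod name survives ReplicaSet re-creation:
	kubectl --context old-k8s-version-820759 describe po -n kube-system -l k8s-app=metrics-server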

                                                
                                    

Test pass (288/321)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 70.41
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.3/json-events 22.69
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.15
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
19 TestBinaryMirror 0.58
20 TestOffline 112.35
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
25 TestAddons/Setup 222.38
27 TestAddons/parallel/Registry 24.63
28 TestAddons/parallel/Ingress 25.65
29 TestAddons/parallel/InspektorGadget 11.25
30 TestAddons/parallel/MetricsServer 5.98
31 TestAddons/parallel/HelmTiller 19.05
33 TestAddons/parallel/CSI 56.92
34 TestAddons/parallel/Headlamp 18.46
35 TestAddons/parallel/CloudSpanner 5.59
36 TestAddons/parallel/LocalPath 23.45
37 TestAddons/parallel/NvidiaDevicePlugin 5.57
40 TestAddons/serial/GCPAuth/Namespaces 0.15
41 TestAddons/StoppedEnableDisable 13.43
42 TestCertOptions 89.28
43 TestCertExpiration 312.2
44 TestDockerFlags 79.43
45 TestForceSystemdFlag 56.19
46 TestForceSystemdEnv 56.78
48 TestKVMDriverInstallOrUpdate 6.94
52 TestErrorSpam/setup 51.45
53 TestErrorSpam/start 0.4
54 TestErrorSpam/status 0.85
55 TestErrorSpam/pause 1.24
56 TestErrorSpam/unpause 1.41
57 TestErrorSpam/stop 13.29
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 68.32
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 38.97
64 TestFunctional/serial/KubeContext 0.05
65 TestFunctional/serial/KubectlGetPods 0.08
68 TestFunctional/serial/CacheCmd/cache/add_remote 2.78
69 TestFunctional/serial/CacheCmd/cache/add_local 2.12
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
71 TestFunctional/serial/CacheCmd/cache/list 0.06
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.32
74 TestFunctional/serial/CacheCmd/cache/delete 0.13
75 TestFunctional/serial/MinikubeKubectlCmd 0.13
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
77 TestFunctional/serial/ExtraConfig 40.81
78 TestFunctional/serial/ComponentHealth 0.07
79 TestFunctional/serial/LogsCmd 1.12
80 TestFunctional/serial/LogsFileCmd 1.15
81 TestFunctional/serial/InvalidService 5.18
83 TestFunctional/parallel/ConfigCmd 0.44
84 TestFunctional/parallel/DashboardCmd 46.21
85 TestFunctional/parallel/DryRun 0.29
86 TestFunctional/parallel/InternationalLanguage 0.15
87 TestFunctional/parallel/StatusCmd 0.88
91 TestFunctional/parallel/ServiceCmdConnect 11.56
92 TestFunctional/parallel/AddonsCmd 0.15
93 TestFunctional/parallel/PersistentVolumeClaim 66.64
95 TestFunctional/parallel/SSHCmd 0.41
96 TestFunctional/parallel/CpCmd 0.94
97 TestFunctional/parallel/MySQL 40.22
98 TestFunctional/parallel/FileSync 0.23
99 TestFunctional/parallel/CertSync 1.57
103 TestFunctional/parallel/NodeLabels 0.08
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.24
107 TestFunctional/parallel/License 0.88
108 TestFunctional/parallel/Version/short 0.06
109 TestFunctional/parallel/Version/components 0.66
110 TestFunctional/parallel/ServiceCmd/DeployApp 13.23
111 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
112 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
113 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
114 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
115 TestFunctional/parallel/ImageCommands/ImageBuild 5.07
116 TestFunctional/parallel/ImageCommands/Setup 3.4
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.26
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.46
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.25
129 TestFunctional/parallel/ServiceCmd/List 0.45
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.36
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
132 TestFunctional/parallel/DockerEnv/bash 1.01
133 TestFunctional/parallel/ServiceCmd/Format 0.38
134 TestFunctional/parallel/ServiceCmd/URL 0.38
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
138 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
139 TestFunctional/parallel/ProfileCmd/profile_list 0.35
140 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
141 TestFunctional/parallel/MountCmd/any-port 33.82
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.47
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.2
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.34
146 TestFunctional/parallel/MountCmd/specific-port 1.59
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.69
148 TestFunctional/delete_addon-resizer_images 0.07
149 TestFunctional/delete_my-image_image 0.01
150 TestFunctional/delete_minikube_cached_images 0.02
151 TestGvisorAddon 348.03
154 TestImageBuild/serial/Setup 52.44
155 TestImageBuild/serial/NormalBuild 3.43
156 TestImageBuild/serial/BuildWithBuildArg 1.29
157 TestImageBuild/serial/BuildWithDockerIgnore 0.39
158 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.29
161 TestIngressAddonLegacy/StartLegacyK8sCluster 142.24
163 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 17.56
164 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.56
165 TestIngressAddonLegacy/serial/ValidateIngressAddons 41.57
168 TestJSONOutput/start/Command 69.87
169 TestJSONOutput/start/Audit 0
171 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/pause/Command 0.58
175 TestJSONOutput/pause/Audit 0
177 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/unpause/Command 0.53
181 TestJSONOutput/unpause/Audit 0
183 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/stop/Command 13.12
187 TestJSONOutput/stop/Audit 0
189 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
191 TestErrorJSONOutput 0.22
196 TestMainNoArgs 0.06
197 TestMinikubeProfile 105.55
200 TestMountStart/serial/StartWithMountFirst 34.74
201 TestMountStart/serial/VerifyMountFirst 0.41
202 TestMountStart/serial/StartWithMountSecond 36.05
203 TestMountStart/serial/VerifyMountSecond 0.4
204 TestMountStart/serial/DeleteFirst 0.67
205 TestMountStart/serial/VerifyMountPostDelete 0.4
206 TestMountStart/serial/Stop 2.11
207 TestMountStart/serial/RestartStopped 27.87
208 TestMountStart/serial/VerifyMountPostStop 0.41
211 TestMultiNode/serial/FreshStart2Nodes 214.28
212 TestMultiNode/serial/DeployApp2Nodes 6.47
213 TestMultiNode/serial/PingHostFrom2Pods 0.95
214 TestMultiNode/serial/AddNode 51.37
215 TestMultiNode/serial/ProfileList 0.22
216 TestMultiNode/serial/CopyFile 7.66
217 TestMultiNode/serial/StopNode 4.02
218 TestMultiNode/serial/StartAfterStop 32.23
219 TestMultiNode/serial/RestartKeepsNodes 183.37
220 TestMultiNode/serial/DeleteNode 1.79
221 TestMultiNode/serial/StopMultiNode 25.62
222 TestMultiNode/serial/RestartMultiNode 116.8
223 TestMultiNode/serial/ValidateNameConflict 53.45
228 TestPreload 260.54
230 TestScheduledStopUnix 123.16
231 TestSkaffold 149.87
234 TestRunningBinaryUpgrade 292.27
236 TestKubernetesUpgrade 237.14
249 TestStoppedBinaryUpgrade/Setup 2.19
252 TestPause/serial/Start 72.85
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
262 TestNoKubernetes/serial/StartWithK8s 77.86
263 TestPause/serial/SecondStartNoReconfiguration 39.71
264 TestNoKubernetes/serial/StartWithStopK8s 17.5
265 TestNoKubernetes/serial/Start 29.19
266 TestPause/serial/Pause 0.59
267 TestPause/serial/VerifyStatus 0.27
268 TestPause/serial/Unpause 0.56
269 TestPause/serial/PauseAgain 0.74
270 TestPause/serial/DeletePaused 1.1
271 TestPause/serial/VerifyDeletedResources 129.61
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
273 TestNoKubernetes/serial/ProfileList 39.55
274 TestNoKubernetes/serial/Stop 2.11
275 TestNoKubernetes/serial/StartNoArgs 38.8
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
277 TestNetworkPlugins/group/auto/Start 105.06
278 TestNetworkPlugins/group/kindnet/Start 84.21
279 TestNetworkPlugins/group/auto/KubeletFlags 0.24
280 TestNetworkPlugins/group/auto/NetCatPod 12.45
281 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
282 TestNetworkPlugins/group/auto/DNS 0.22
283 TestNetworkPlugins/group/auto/Localhost 0.19
284 TestNetworkPlugins/group/auto/HairPin 0.17
285 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
286 TestNetworkPlugins/group/kindnet/NetCatPod 11.38
287 TestNetworkPlugins/group/kindnet/DNS 0.23
288 TestNetworkPlugins/group/kindnet/Localhost 0.18
289 TestNetworkPlugins/group/kindnet/HairPin 0.17
290 TestNetworkPlugins/group/calico/Start 115.71
291 TestNetworkPlugins/group/custom-flannel/Start 101.77
292 TestNetworkPlugins/group/false/Start 77.87
293 TestNetworkPlugins/group/calico/ControllerPod 5.03
294 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
295 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.36
296 TestNetworkPlugins/group/calico/KubeletFlags 0.25
297 TestNetworkPlugins/group/calico/NetCatPod 14.41
298 TestNetworkPlugins/group/custom-flannel/DNS 0.18
299 TestNetworkPlugins/group/custom-flannel/Localhost 0.47
300 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
301 TestNetworkPlugins/group/calico/DNS 0.26
302 TestNetworkPlugins/group/calico/Localhost 0.18
303 TestNetworkPlugins/group/calico/HairPin 0.21
304 TestNetworkPlugins/group/flannel/Start 92.07
305 TestNetworkPlugins/group/bridge/Start 111.71
306 TestNetworkPlugins/group/false/KubeletFlags 0.23
307 TestNetworkPlugins/group/false/NetCatPod 12.36
308 TestNetworkPlugins/group/false/DNS 0.18
309 TestNetworkPlugins/group/false/Localhost 0.15
310 TestNetworkPlugins/group/false/HairPin 0.14
311 TestNetworkPlugins/group/enable-default-cni/Start 101.28
312 TestNetworkPlugins/group/flannel/ControllerPod 5.03
313 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
314 TestNetworkPlugins/group/flannel/NetCatPod 13.37
315 TestNetworkPlugins/group/flannel/DNS 0.19
316 TestNetworkPlugins/group/flannel/Localhost 0.17
317 TestNetworkPlugins/group/flannel/HairPin 0.16
318 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
319 TestNetworkPlugins/group/bridge/NetCatPod 12.5
320 TestNetworkPlugins/group/kubenet/Start 77.86
321 TestNetworkPlugins/group/bridge/DNS 0.2
322 TestNetworkPlugins/group/bridge/Localhost 0.18
323 TestNetworkPlugins/group/bridge/HairPin 0.16
325 TestStartStop/group/old-k8s-version/serial/FirstStart 164.02
326 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
327 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.36
328 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
329 TestNetworkPlugins/group/enable-default-cni/Localhost 0.23
330 TestNetworkPlugins/group/enable-default-cni/HairPin 0.28
331 TestStoppedBinaryUpgrade/MinikubeLogs 1.46
333 TestStartStop/group/no-preload/serial/FirstStart 142.66
335 TestStartStop/group/embed-certs/serial/FirstStart 111.67
336 TestNetworkPlugins/group/kubenet/KubeletFlags 0.24
337 TestNetworkPlugins/group/kubenet/NetCatPod 12.4
338 TestNetworkPlugins/group/kubenet/DNS 0.23
339 TestNetworkPlugins/group/kubenet/Localhost 0.18
340 TestNetworkPlugins/group/kubenet/HairPin 0.2
342 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 84
343 TestStartStop/group/embed-certs/serial/DeployApp 10.49
344 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.17
345 TestStartStop/group/embed-certs/serial/Stop 13.14
346 TestStartStop/group/old-k8s-version/serial/DeployApp 11.52
347 TestStartStop/group/no-preload/serial/DeployApp 11.53
348 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.28
349 TestStartStop/group/embed-certs/serial/SecondStart 331.98
350 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 12.53
351 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.06
352 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.48
353 TestStartStop/group/old-k8s-version/serial/Stop 13.37
354 TestStartStop/group/no-preload/serial/Stop 13.55
355 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.15
356 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.14
357 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.3
358 TestStartStop/group/old-k8s-version/serial/SecondStart 454.93
359 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
360 TestStartStop/group/no-preload/serial/SecondStart 353.6
361 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
362 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 375.6
363 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 24.02
364 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
365 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
366 TestStartStop/group/embed-certs/serial/Pause 2.65
368 TestStartStop/group/newest-cni/serial/FirstStart 80.71
369 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 24.04
370 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.14
371 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.4
372 TestStartStop/group/no-preload/serial/Pause 4.36
373 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 18.02
374 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
375 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
376 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.64
377 TestStartStop/group/newest-cni/serial/DeployApp 0
378 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.11
379 TestStartStop/group/newest-cni/serial/Stop 13.13
380 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
381 TestStartStop/group/newest-cni/serial/SecondStart 47.94
382 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
383 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
385 TestStartStop/group/old-k8s-version/serial/Pause 2.49
386 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
388 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
389 TestStartStop/group/newest-cni/serial/Pause 2.33
TestDownloadOnly/v1.16.0/json-events (70.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-753377 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-753377 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (1m10.407741181s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (70.41s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-753377
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-753377: exit status 85 (81.202574ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-753377 | jenkins | v1.31.2 | 25 Oct 23 21:11 UTC |          |
	|         | -p download-only-753377        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 21:11:27
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 21:11:27.042930   88256 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:11:27.043038   88256 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:11:27.043047   88256 out.go:309] Setting ErrFile to fd 2...
	I1025 21:11:27.043051   88256 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:11:27.043226   88256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-80960/.minikube/bin
	W1025 21:11:27.043360   88256 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17488-80960/.minikube/config/config.json: open /home/jenkins/minikube-integration/17488-80960/.minikube/config/config.json: no such file or directory
	I1025 21:11:27.044004   88256 out.go:303] Setting JSON to true
	I1025 21:11:27.044830   88256 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":10422,"bootTime":1698257865,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 21:11:27.044894   88256 start.go:138] virtualization: kvm guest
	I1025 21:11:27.047474   88256 out.go:97] [download-only-753377] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1025 21:11:27.049321   88256 out.go:169] MINIKUBE_LOCATION=17488
	W1025 21:11:27.047607   88256 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17488-80960/.minikube/cache/preloaded-tarball: no such file or directory
	I1025 21:11:27.047684   88256 notify.go:220] Checking for updates...
	I1025 21:11:27.052076   88256 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:11:27.053478   88256 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17488-80960/kubeconfig
	I1025 21:11:27.054921   88256 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-80960/.minikube
	I1025 21:11:27.056492   88256 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1025 21:11:27.059118   88256 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 21:11:27.059397   88256 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 21:11:27.095090   88256 out.go:97] Using the kvm2 driver based on user configuration
	I1025 21:11:27.095123   88256 start.go:298] selected driver: kvm2
	I1025 21:11:27.095129   88256 start.go:902] validating driver "kvm2" against <nil>
	I1025 21:11:27.095436   88256 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:11:27.095510   88256 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17488-80960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1025 21:11:27.110000   88256 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1025 21:11:27.110049   88256 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1025 21:11:27.110559   88256 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1025 21:11:27.110704   88256 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 21:11:27.110770   88256 cni.go:84] Creating CNI manager for ""
	I1025 21:11:27.110787   88256 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 21:11:27.110798   88256 start_flags.go:323] config:
	{Name:download-only-753377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-753377 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:11:27.111026   88256 iso.go:125] acquiring lock: {Name:mk6659ecb6ed7b24fa2ae65bc0b8e3b5916d75e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:11:27.112895   88256 out.go:97] Downloading VM boot image ...
	I1025 21:11:27.112932   88256 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17488-80960/.minikube/cache/iso/amd64/minikube-v1.31.0-1697471113-17434-amd64.iso
	I1025 21:11:40.658276   88256 out.go:97] Starting control plane node download-only-753377 in cluster download-only-753377
	I1025 21:11:40.658384   88256 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 21:11:40.835203   88256 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1025 21:11:40.835237   88256 cache.go:56] Caching tarball of preloaded images
	I1025 21:11:40.835424   88256 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 21:11:40.837478   88256 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1025 21:11:40.837502   88256 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1025 21:11:41.022060   88256 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/17488-80960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1025 21:11:59.376364   88256 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1025 21:11:59.376458   88256 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17488-80960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1025 21:12:00.112519   88256 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1025 21:12:00.112874   88256 profile.go:148] Saving config to /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/download-only-753377/config.json ...
	I1025 21:12:00.112906   88256 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/download-only-753377/config.json: {Name:mkd2ff8d94f8a4a6a7ae6c021fc1e2ab2ec77c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 21:12:00.113069   88256 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1025 21:12:00.113230   88256 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17488-80960/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-753377"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
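The exit status 85 from 'minikube logs' is expected here rather than a defect: a --download-only start only populates caches and never creates a VM, so there is no control plane for the logs command to query (hence 'The control plane node "" does not exist.' above). A minimal sketch of the same sequence, reusing the exact commands from this report:

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-753377 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2
	# Expected to exit non-zero: the profile has cached artifacts but no running cluster.
	out/minikube-linux-amd64 logs -p download-only-753377 || echo 'no control plane yet (expected for a download-only profile)'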

                                                
                                    
TestDownloadOnly/v1.28.3/json-events (22.69s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-753377 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-753377 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=docker --driver=kvm2 : (22.686910652s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (22.69s)
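The v1.28.3 download completes in about 23s versus about 70s for v1.16.0 above, consistent with the boot ISO and kubectl already being cached by the first run (the Last Start log further below notes 'Using the kvm2 driver based on existing profile'). A sketch for inspecting what that first run left behind, assuming the MINIKUBE_HOME layout shown in these logs:

	ls /home/jenkins/minikube-integration/17488-80960/.minikube/cache/iso/amd64
	ls /home/jenkins/minikube-integration/17488-80960/.minikube/cache/preloaded-tarball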

                                                
                                    
TestDownloadOnly/v1.28.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-753377
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-753377: exit status 85 (74.000436ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-753377 | jenkins | v1.31.2 | 25 Oct 23 21:11 UTC |          |
	|         | -p download-only-753377        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-753377 | jenkins | v1.31.2 | 25 Oct 23 21:12 UTC |          |
	|         | -p download-only-753377        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/25 21:12:37
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 21:12:37.535271   88446 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:12:37.535426   88446 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:12:37.535436   88446 out.go:309] Setting ErrFile to fd 2...
	I1025 21:12:37.535441   88446 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:12:37.535659   88446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-80960/.minikube/bin
	W1025 21:12:37.535764   88446 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17488-80960/.minikube/config/config.json: open /home/jenkins/minikube-integration/17488-80960/.minikube/config/config.json: no such file or directory
	I1025 21:12:37.536179   88446 out.go:303] Setting JSON to true
	I1025 21:12:37.537025   88446 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":10493,"bootTime":1698257865,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 21:12:37.537092   88446 start.go:138] virtualization: kvm guest
	I1025 21:12:37.539762   88446 out.go:97] [download-only-753377] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1025 21:12:37.541848   88446 out.go:169] MINIKUBE_LOCATION=17488
	I1025 21:12:37.540043   88446 notify.go:220] Checking for updates...
	I1025 21:12:37.547876   88446 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:12:37.549813   88446 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17488-80960/kubeconfig
	I1025 21:12:37.551754   88446 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-80960/.minikube
	I1025 21:12:37.553493   88446 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1025 21:12:37.556673   88446 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 21:12:37.557227   88446 config.go:182] Loaded profile config "download-only-753377": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1025 21:12:37.557293   88446 start.go:810] api.Load failed for download-only-753377: filestore "download-only-753377": Docker machine "download-only-753377" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1025 21:12:37.557410   88446 driver.go:378] Setting default libvirt URI to qemu:///system
	W1025 21:12:37.557453   88446 start.go:810] api.Load failed for download-only-753377: filestore "download-only-753377": Docker machine "download-only-753377" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1025 21:12:37.589287   88446 out.go:97] Using the kvm2 driver based on existing profile
	I1025 21:12:37.589326   88446 start.go:298] selected driver: kvm2
	I1025 21:12:37.589334   88446 start.go:902] validating driver "kvm2" against &{Name:download-only-753377 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-753377 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:12:37.589787   88446 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:12:37.589911   88446 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17488-80960/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1025 21:12:37.604990   88446 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1025 21:12:37.606115   88446 cni.go:84] Creating CNI manager for ""
	I1025 21:12:37.606150   88446 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 21:12:37.606167   88446 start_flags.go:323] config:
	{Name:download-only-753377 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-753377 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:12:37.606374   88446 iso.go:125] acquiring lock: {Name:mk6659ecb6ed7b24fa2ae65bc0b8e3b5916d75e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 21:12:37.608492   88446 out.go:97] Starting control plane node download-only-753377 in cluster download-only-753377
	I1025 21:12:37.608508   88446 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 21:12:38.433309   88446 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1025 21:12:38.433350   88446 cache.go:56] Caching tarball of preloaded images
	I1025 21:12:38.433554   88446 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1025 21:12:38.436005   88446 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1025 21:12:38.436031   88446 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 ...
	I1025 21:12:38.616710   88446 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4?checksum=md5:82104bbf889ff8b69d5c141ce86c05ac -> /home/jenkins/minikube-integration/17488-80960/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-753377"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.07s)
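
A note on the exit code above: status 85 is expected here rather than a failure. The download-only profile never created a control plane node, so "minikube logs" has nothing to read, and the test passes by asserting on exactly that exit status. As a minimal sketch, this is how a specific exit code can be checked when shelling out in Go (the binary path and profile name are copied from the log; the standalone program is purely illustrative, not the suite's real helper):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as in the test; expected to exit 85 because the
	// download-only profile has no control plane node.
	cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-753377")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("exit code: %d\n%s", exitErr.ExitCode(), out)
	}
}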

TestDownloadOnly/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.15s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-753377
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-756855 --alsologtostderr --binary-mirror http://127.0.0.1:44683 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-756855" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-756855
--- PASS: TestBinaryMirror (0.58s)

TestOffline (112.35s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-945046 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-945046 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m51.161514017s)
helpers_test.go:175: Cleaning up "offline-docker-945046" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-945046
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-945046: (1.193145485s)
--- PASS: TestOffline (112.35s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-245571
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-245571: exit status 85 (69.963321ms)

-- stdout --
	* Profile "addons-245571" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-245571"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-245571
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-245571: exit status 85 (73.553019ms)

-- stdout --
	* Profile "addons-245571" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-245571"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (222.38s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-245571 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-245571 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m42.38161369s)
--- PASS: TestAddons/Setup (222.38s)

TestAddons/parallel/Registry (24.63s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 19.974742ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-jp7r8" [5aca4cdd-00fe-4119-9e33-076d1652d48e] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.03428109s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-t7hgs" [b7f9e3fb-829f-4392-97a0-aec0bffe8750] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.013710739s
addons_test.go:339: (dbg) Run:  kubectl --context addons-245571 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-245571 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-245571 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (13.474840927s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-245571 ip
2023/10/25 21:17:07 [DEBUG] GET http://192.168.39.24:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-245571 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (24.63s)
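
The busybox pod above is the actual health check: it resolves the in-cluster service DNS name and probes it with wget --spider. A standalone reproduction (the kubectl arguments are copied verbatim from the log; the wrapper program is an assumption, not the test's own code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// One-shot pod that probes the registry service from inside the cluster.
	probe := exec.Command("kubectl", "--context", "addons-245571",
		"run", "--rm", "registry-test", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "-it", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	out, err := probe.CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("registry probe failed:", err)
	}
}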

TestAddons/parallel/Ingress (25.65s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-245571 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-245571 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-245571 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [971dc95a-7241-40de-ae3d-b6743d003800] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [971dc95a-7241-40de-ae3d-b6743d003800] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.019070184s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-245571 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-245571 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:285: (dbg) Done: kubectl --context addons-245571 replace --force -f testdata/ingress-dns-example-v1.yaml: (1.052024834s)
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-245571 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.24
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-245571 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-245571 addons disable ingress-dns --alsologtostderr -v=1: (2.625908294s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-245571 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-245571 addons disable ingress --alsologtostderr -v=1: (7.688865218s)
--- PASS: TestAddons/parallel/Ingress (25.65s)
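
The curl through "minikube ssh" works because ingress-nginx routes on the HTTP virtual host: a request to the node only reaches the nginx backend when the Host header matches the ingress rule. A sketch of the same request in Go (the 192.168.39.24 address comes from the log; whether it is reachable from where you run this is an assumption):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://192.168.39.24/", nil)
	if err != nil {
		panic(err)
	}
	// Equivalent of curl -H 'Host: nginx.example.com'; without it the
	// ingress controller would serve its default backend instead.
	req.Host = "nginx.example.com"
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}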

TestAddons/parallel/InspektorGadget (11.25s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-5b2ds" [fec10d70-7724-44cc-a9f5-7817829f8897] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.014066709s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-245571
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-245571: (6.233517296s)
--- PASS: TestAddons/parallel/InspektorGadget (11.25s)

TestAddons/parallel/MetricsServer (5.98s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 7.380628ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-jdwvv" [7dc7340d-9518-4570-9191-ce6c773d0365] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.031904321s
addons_test.go:414: (dbg) Run:  kubectl --context addons-245571 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-245571 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.98s)

TestAddons/parallel/HelmTiller (19.05s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 19.528996ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-sxq48" [768f86b3-209f-4e3f-92ea-a1a0e22bc287] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.028623484s
addons_test.go:472: (dbg) Run:  kubectl --context addons-245571 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-245571 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (13.415350438s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-245571 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (19.05s)

TestAddons/parallel/CSI (56.92s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 6.110165ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-245571 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-245571 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [68d19a49-e9d7-4f00-9761-b327b0f37f56] Pending
helpers_test.go:344: "task-pv-pod" [68d19a49-e9d7-4f00-9761-b327b0f37f56] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [68d19a49-e9d7-4f00-9761-b327b0f37f56] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 21.029145901s
addons_test.go:583: (dbg) Run:  kubectl --context addons-245571 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-245571 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-245571 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-245571 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-245571 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-245571 delete pod task-pv-pod: (1.32452313s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-245571 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-245571 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-245571 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [59e3ac55-8039-4837-bab5-28ee962e0fc5] Pending
helpers_test.go:344: "task-pv-pod-restore" [59e3ac55-8039-4837-bab5-28ee962e0fc5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [59e3ac55-8039-4837-bab5-28ee962e0fc5] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.028927014s
addons_test.go:625: (dbg) Run:  kubectl --context addons-245571 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-245571 delete pod task-pv-pod-restore: (1.046529104s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-245571 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-245571 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-245571 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-245571 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.759928553s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-245571 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (56.92s)
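
The repeated "get pvc ... -o jsonpath={.status.phase}" lines above are a poll-until-Bound loop around kubectl. A minimal equivalent (the interval and the 6m timeout are assumptions mirroring the stated wait; the suite's real helper lives in helpers_test.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		// Print only the PVC's phase, e.g. Pending or Bound.
		out, err := exec.Command("kubectl", "--context", "addons-245571",
			"get", "pvc", "hpvc", "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			fmt.Println("pvc hpvc is Bound")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pvc hpvc")
}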

TestAddons/parallel/Headlamp (18.46s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-245571 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-245571 --alsologtostderr -v=1: (1.447101008s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-94b766c-nz44f" [3e89fffc-39bc-4afb-a4a3-d47a66b86d40] Pending
helpers_test.go:344: "headlamp-94b766c-nz44f" [3e89fffc-39bc-4afb-a4a3-d47a66b86d40] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-nz44f" [3e89fffc-39bc-4afb-a4a3-d47a66b86d40] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.015516298s
--- PASS: TestAddons/parallel/Headlamp (18.46s)

TestAddons/parallel/CloudSpanner (5.59s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-qhkpm" [2fd5699e-1761-4f4a-a975-b7d9c746c5ea] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.011580382s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-245571
--- PASS: TestAddons/parallel/CloudSpanner (5.59s)

TestAddons/parallel/LocalPath (23.45s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-245571 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-245571 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-245571 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7af7651e-fd21-4e95-8931-f7932eb21349] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7af7651e-fd21-4e95-8931-f7932eb21349] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7af7651e-fd21-4e95-8931-f7932eb21349] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.011849096s
addons_test.go:890: (dbg) Run:  kubectl --context addons-245571 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-245571 ssh "cat /opt/local-path-provisioner/pvc-a6a29570-852f-46a0-b8a6-7ba369e6216e_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-245571 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-245571 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-245571 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (23.45s)

TestAddons/parallel/NvidiaDevicePlugin (5.57s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-swpd4" [7917affc-1fc8-43a4-a434-fdcfcc4adee8] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.027985279s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-245571
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.57s)

TestAddons/serial/GCPAuth/Namespaces (0.15s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-245571 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-245571 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

TestAddons/StoppedEnableDisable (13.43s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-245571
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-245571: (13.117434309s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-245571
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-245571
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-245571
--- PASS: TestAddons/StoppedEnableDisable (13.43s)

TestCertOptions (89.28s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-623638 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
E1025 22:02:25.192327   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-623638 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m27.508672073s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-623638 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-623638 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-623638 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-623638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-623638
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-623638: (1.168747472s)
--- PASS: TestCertOptions (89.28s)
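
The openssl call above inspects the apiserver certificate for the SANs requested via --apiserver-ips and --apiserver-names. The same check can be done with Go's standard library, assuming the certificate has first been copied out of the VM (the local file name here is hypothetical):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// e.g. fetched beforehand with: minikube -p cert-options-623638 ssh
	//      "sudo cat /var/lib/minikube/certs/apiserver.crt" > apiserver.crt
	pemBytes, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The test expects 127.0.0.1 and 192.168.15.15 among the IP SANs and
	// localhost / www.google.com among the DNS SANs.
	fmt.Println("IP SANs: ", cert.IPAddresses)
	fmt.Println("DNS SANs:", cert.DNSNames)
}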

TestCertExpiration (312.2s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-107384 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-107384 --memory=2048 --cert-expiration=3m --driver=kvm2 : (59.740498452s)
E1025 22:03:15.639808   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/skaffold-183899/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-107384 --memory=2048 --cert-expiration=8760h --driver=kvm2 
E1025 22:06:20.370715   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/gvisor-342758/client.crt: no such file or directory
E1025 22:06:43.704683   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-107384 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (1m11.420042231s)
helpers_test.go:175: Cleaning up "cert-expiration-107384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-107384
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-107384: (1.037170801s)
--- PASS: TestCertExpiration (312.20s)

TestDockerFlags (79.43s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-847762 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
E1025 22:02:16.250594   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-847762 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m17.588625207s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-847762 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-847762 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-847762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-847762
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-847762: (1.139148171s)
--- PASS: TestDockerFlags (79.43s)
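
The two systemctl queries above confirm that --docker-env values land in the docker unit's Environment and --docker-opt values in its ExecStart line. A standalone version of the first check (the ssh invocation is copied from the log; the wrapper program is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "docker-flags-847762",
		"ssh", "sudo systemctl show docker --property=Environment --no-pager").Output()
	if err != nil {
		panic(err)
	}
	env := string(out)
	// Both values were passed via --docker-env on the start line above.
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		fmt.Printf("%s present: %v\n", want, strings.Contains(env, want))
	}
}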

TestForceSystemdFlag (56.19s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-947263 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-947263 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (54.820800032s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-947263 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-947263" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-947263
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-947263: (1.072264319s)
--- PASS: TestForceSystemdFlag (56.19s)

TestForceSystemdEnv (56.78s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-024294 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-024294 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (55.47667703s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-024294 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-024294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-024294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-024294: (1.016050636s)
--- PASS: TestForceSystemdEnv (56.78s)

TestKVMDriverInstallOrUpdate (6.94s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
E1025 21:57:25.192109   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
--- PASS: TestKVMDriverInstallOrUpdate (6.94s)

TestErrorSpam/setup (51.45s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-579427 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-579427 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-579427 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-579427 --driver=kvm2 : (51.451248821s)
--- PASS: TestErrorSpam/setup (51.45s)

TestErrorSpam/start (0.4s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-579427 --log_dir /tmp/nospam-579427 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-579427 --log_dir /tmp/nospam-579427 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-579427 --log_dir /tmp/nospam-579427 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

TestErrorSpam/status (0.85s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-579427 --log_dir /tmp/nospam-579427 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-579427 --log_dir /tmp/nospam-579427 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-579427 --log_dir /tmp/nospam-579427 status
--- PASS: TestErrorSpam/status (0.85s)

TestErrorSpam/pause (1.24s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-579427 --log_dir /tmp/nospam-579427 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-579427 --log_dir /tmp/nospam-579427 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-579427 --log_dir /tmp/nospam-579427 pause
--- PASS: TestErrorSpam/pause (1.24s)

TestErrorSpam/unpause (1.41s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-579427 --log_dir /tmp/nospam-579427 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-579427 --log_dir /tmp/nospam-579427 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-579427 --log_dir /tmp/nospam-579427 unpause
--- PASS: TestErrorSpam/unpause (1.41s)

TestErrorSpam/stop (13.29s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-579427 --log_dir /tmp/nospam-579427 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-579427 --log_dir /tmp/nospam-579427 stop: (13.115331296s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-579427 --log_dir /tmp/nospam-579427 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-579427 --log_dir /tmp/nospam-579427 stop
--- PASS: TestErrorSpam/stop (13.29s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17488-80960/.minikube/files/etc/test/nested/copy/88244/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (68.32s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-389152 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-389152 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m8.320109072s)
--- PASS: TestFunctional/serial/StartWithProxy (68.32s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.97s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-389152 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-389152 --alsologtostderr -v=8: (38.968900765s)
functional_test.go:659: soft start took 38.969533468s for "functional-389152" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.97s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-389152 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.78s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.78s)
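
`cache add` pulls an image into minikube's host-side cache and loads it into the running node, so the runtime can use it without a registry pull. A minimal by-hand check against this profile (all commands appear in the surrounding cache tests):

	out/minikube-linux-amd64 -p functional-389152 cache add registry.k8s.io/pause:3.1
	out/minikube-linux-amd64 cache list
	out/minikube-linux-amd64 -p functional-389152 ssh sudo crictl images   # cached tags are visible inside the node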

TestFunctional/serial/CacheCmd/cache/add_local (2.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-389152 /tmp/TestFunctionalserialCacheCmdcacheadd_local510963026/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 cache add minikube-local-cache-test:functional-389152
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-389152 cache add minikube-local-cache-test:functional-389152: (1.785406145s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 cache delete minikube-local-cache-test:functional-389152
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-389152
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.12s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-389152 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (263.576348ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.32s)
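
The sequence above can be replayed by hand: remove the image inside the node, confirm `crictl inspecti` fails, then let `cache reload` push the cached image back in:

	out/minikube-linux-amd64 -p functional-389152 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-389152 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail: image was removed
	out/minikube-linux-amd64 -p functional-389152 cache reload
	out/minikube-linux-amd64 -p functional-389152 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again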

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 kubectl -- --context functional-389152 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-389152 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (40.81s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-389152 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1025 21:21:43.704694   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
E1025 21:21:43.710392   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
E1025 21:21:43.720680   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
E1025 21:21:43.740975   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
E1025 21:21:43.781295   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
E1025 21:21:43.861660   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
E1025 21:21:44.022181   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
E1025 21:21:44.342902   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
E1025 21:21:44.983924   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
E1025 21:21:46.264461   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
E1025 21:21:48.825698   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
E1025 21:21:53.946603   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
E1025 21:22:04.186982   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-389152 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.810271161s)
functional_test.go:757: restart took 40.810403942s for "functional-389152" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.81s)
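
--extra-config takes component.key=value pairs that are passed through as flags to the named Kubernetes component; the run above hands enable-admission-plugins=NamespaceAutoProvision to the apiserver and restarts the cluster with it:

	out/minikube-linux-amd64 start -p functional-389152 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all

(The repeated cert_rotation errors appear to come from a client-go watcher still referencing certs of the deleted addons-245571 profile; they are unrelated to this test's outcome.)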

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-389152 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.12s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-389152 logs: (1.120316257s)
--- PASS: TestFunctional/serial/LogsCmd (1.12s)

TestFunctional/serial/LogsFileCmd (1.15s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 logs --file /tmp/TestFunctionalserialLogsFileCmd2849797089/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-389152 logs --file /tmp/TestFunctionalserialLogsFileCmd2849797089/001/logs.txt: (1.153256497s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.15s)

TestFunctional/serial/InvalidService (5.18s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-389152 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-389152
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-389152: exit status 115 (294.62359ms)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.102:31199 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-389152 delete -f testdata/invalidsvc.yaml
E1025 21:22:24.667616   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
functional_test.go:2323: (dbg) Done: kubectl --context functional-389152 delete -f testdata/invalidsvc.yaml: (1.56385173s)
--- PASS: TestFunctional/serial/InvalidService (5.18s)
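
The flow here: apply a service whose pods cannot run, then confirm `minikube service` refuses it with SVC_UNREACHABLE (exit 115) instead of printing a dead URL:

	kubectl --context functional-389152 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-amd64 service invalid-svc -p functional-389152   # exit 115: no running pod backs the service
	kubectl --context functional-389152 delete -f testdata/invalidsvc.yaml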

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-389152 config get cpus: exit status 14 (75.007785ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-389152 config get cpus: exit status 14 (61.724011ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
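
The cycle being verified: `config get` on an unset key exits 14 with "specified key could not be found in config", `config set` persists the value, and `config unset` returns the key to the not-found state:

	out/minikube-linux-amd64 -p functional-389152 config get cpus     # exit 14 while unset
	out/minikube-linux-amd64 -p functional-389152 config set cpus 2
	out/minikube-linux-amd64 -p functional-389152 config get cpus     # prints 2
	out/minikube-linux-amd64 -p functional-389152 config unset cpus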

TestFunctional/parallel/DashboardCmd (46.21s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-389152 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-389152 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 95691: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (46.21s)

TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-389152 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-389152 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (146.057528ms)

-- stdout --
	* [functional-389152] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17488-80960/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-80960/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1025 21:22:49.162205   95573 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:22:49.162334   95573 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:22:49.162343   95573 out.go:309] Setting ErrFile to fd 2...
	I1025 21:22:49.162348   95573 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:22:49.162550   95573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-80960/.minikube/bin
	I1025 21:22:49.163093   95573 out.go:303] Setting JSON to false
	I1025 21:22:49.164004   95573 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":11104,"bootTime":1698257865,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 21:22:49.164067   95573 start.go:138] virtualization: kvm guest
	I1025 21:22:49.165924   95573 out.go:177] * [functional-389152] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1025 21:22:49.167647   95573 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 21:22:49.167663   95573 notify.go:220] Checking for updates...
	I1025 21:22:49.168959   95573 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:22:49.170243   95573 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17488-80960/kubeconfig
	I1025 21:22:49.171504   95573 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-80960/.minikube
	I1025 21:22:49.172801   95573 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 21:22:49.174061   95573 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 21:22:49.175654   95573 config.go:182] Loaded profile config "functional-389152": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 21:22:49.176064   95573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1025 21:22:49.176119   95573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:22:49.190518   95573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40009
	I1025 21:22:49.190939   95573 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:22:49.191540   95573 main.go:141] libmachine: Using API Version  1
	I1025 21:22:49.191566   95573 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:22:49.191918   95573 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:22:49.192082   95573 main.go:141] libmachine: (functional-389152) Calling .DriverName
	I1025 21:22:49.192344   95573 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 21:22:49.192625   95573 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1025 21:22:49.192657   95573 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:22:49.207312   95573 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39173
	I1025 21:22:49.207743   95573 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:22:49.208277   95573 main.go:141] libmachine: Using API Version  1
	I1025 21:22:49.208307   95573 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:22:49.208687   95573 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:22:49.208929   95573 main.go:141] libmachine: (functional-389152) Calling .DriverName
	I1025 21:22:49.241473   95573 out.go:177] * Using the kvm2 driver based on existing profile
	I1025 21:22:49.243098   95573 start.go:298] selected driver: kvm2
	I1025 21:22:49.243119   95573 start.go:902] validating driver "kvm2" against &{Name:functional-389152 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-389152 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.102 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:22:49.243234   95573 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:22:49.245522   95573 out.go:177] 
	W1025 21:22:49.247441   95573 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 21:22:49.249192   95573 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-389152 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.29s)
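
--dry-run runs the full validation path without creating or mutating anything, which is why the 250MB request fails up front: it is below minikube's 1800MB usable minimum (exit 23, RSRC_INSUFFICIENT_REQ_MEMORY), while the same start without the undersized memory override passes:

	out/minikube-linux-amd64 start -p functional-389152 --dry-run --memory 250MB --driver=kvm2    # exit 23: below the 1800MB minimum
	out/minikube-linux-amd64 start -p functional-389152 --dry-run --driver=kvm2                   # validation passes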

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-389152 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-389152 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (153.449468ms)

-- stdout --
	* [functional-389152] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17488-80960/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-80960/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1025 21:22:49.457670   95628 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:22:49.457955   95628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:22:49.457966   95628 out.go:309] Setting ErrFile to fd 2...
	I1025 21:22:49.457973   95628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:22:49.458249   95628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-80960/.minikube/bin
	I1025 21:22:49.458808   95628 out.go:303] Setting JSON to false
	I1025 21:22:49.459702   95628 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":11105,"bootTime":1698257865,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1025 21:22:49.459768   95628 start.go:138] virtualization: kvm guest
	I1025 21:22:49.461857   95628 out.go:177] * [functional-389152] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I1025 21:22:49.463408   95628 out.go:177]   - MINIKUBE_LOCATION=17488
	I1025 21:22:49.463375   95628 notify.go:220] Checking for updates...
	I1025 21:22:49.464832   95628 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 21:22:49.466299   95628 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17488-80960/kubeconfig
	I1025 21:22:49.467613   95628 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-80960/.minikube
	I1025 21:22:49.468999   95628 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1025 21:22:49.470200   95628 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 21:22:49.471818   95628 config.go:182] Loaded profile config "functional-389152": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 21:22:49.472334   95628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1025 21:22:49.472432   95628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:22:49.486637   95628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46663
	I1025 21:22:49.487050   95628 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:22:49.487691   95628 main.go:141] libmachine: Using API Version  1
	I1025 21:22:49.487721   95628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:22:49.488076   95628 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:22:49.488292   95628 main.go:141] libmachine: (functional-389152) Calling .DriverName
	I1025 21:22:49.488569   95628 driver.go:378] Setting default libvirt URI to qemu:///system
	I1025 21:22:49.488870   95628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1025 21:22:49.488906   95628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:22:49.504113   95628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36623
	I1025 21:22:49.504520   95628 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:22:49.504992   95628 main.go:141] libmachine: Using API Version  1
	I1025 21:22:49.505011   95628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:22:49.505361   95628 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:22:49.505575   95628 main.go:141] libmachine: (functional-389152) Calling .DriverName
	I1025 21:22:49.541597   95628 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1025 21:22:49.542951   95628 start.go:298] selected driver: kvm2
	I1025 21:22:49.542965   95628 start.go:902] validating driver "kvm2" against &{Name:functional-389152 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17434/minikube-v1.31.0-1697471113-17434-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1698055645-17423@sha256:fb2566ae68d58d9dce5cb4087954a42bedc9f0c47c18aef3d28a238a8beeb883 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-389152 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.102 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1025 21:22:49.543115   95628 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 21:22:49.545169   95628 out.go:177] 
	W1025 21:22:49.546453   95628 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1025 21:22:49.547904   95628 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)
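
This is the same RSRC_INSUFFICIENT_REQ_MEMORY rejection as in DryRun, rendered in French ("Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo" is the French form of "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB"). The flags are identical; only the harness's locale differs, presumably along these lines (LC_ALL=fr is an assumption, the log does not show the variable):

	LC_ALL=fr out/minikube-linux-amd64 start -p functional-389152 --dry-run --memory 250MB --driver=kvm2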

TestFunctional/parallel/StatusCmd (0.88s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.88s)
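
`status -f` renders a Go template over the status struct; the fields used above are .Host, .Kubelet, .APIServer and .Kubeconfig (the "kublet" spelling is just the literal label text in the test's template, not a field name), and `-o json` emits the same struct as JSON:

	out/minikube-linux-amd64 -p functional-389152 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
	out/minikube-linux-amd64 -p functional-389152 status -o json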

TestFunctional/parallel/ServiceCmdConnect (11.56s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-389152 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-389152 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-pq8p2" [3aaeabf9-8d36-4256-9e34-da37b7bbd86a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-pq8p2" [3aaeabf9-8d36-4256-9e34-da37b7bbd86a] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.023341243s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.102:30441
functional_test.go:1674: http://192.168.39.102:30441: success! body:

Hostname: hello-node-connect-55497b8b78-pq8p2

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.102:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.102:30441
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.56s)
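
Condensed, the connectivity check is: create a NodePort-exposed echoserver deployment, ask minikube for its URL, and GET it (the harness uses a Go HTTP client; curl shown here as an equivalent):

	kubectl --context functional-389152 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-389152 expose deployment hello-node-connect --type=NodePort --port=8080
	out/minikube-linux-amd64 -p functional-389152 service hello-node-connect --url   # printed http://192.168.39.102:30441 in this run
	curl http://192.168.39.102:30441/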

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (66.64s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [bfe8eb8e-0cfa-46f6-9e33-0b62ab7aae0b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.069347045s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-389152 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-389152 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-389152 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-389152 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-389152 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e30b944d-589d-460e-8ae3-c1aebddb46b7] Pending
helpers_test.go:344: "sp-pod" [e30b944d-589d-460e-8ae3-c1aebddb46b7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e30b944d-589d-460e-8ae3-c1aebddb46b7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.025554612s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-389152 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-389152 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-389152 delete -f testdata/storage-provisioner/pod.yaml: (1.581454037s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-389152 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bda2e511-1c90-4852-822f-784c4872e79d] Pending
helpers_test.go:344: "sp-pod" [bda2e511-1c90-4852-822f-784c4872e79d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1025 21:23:05.627795   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [bda2e511-1c90-4852-822f-784c4872e79d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 36.023550114s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-389152 exec sp-pod -- ls /tmp/mount
2023/10/25 21:23:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (66.64s)
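
The persistence check reduces to: bind a pod to the PVC, write through the mount, delete and recreate the pod, and confirm the file survived:

	kubectl --context functional-389152 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-389152 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-389152 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-389152 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-389152 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-389152 exec sp-pod -- ls /tmp/mount   # foo is still there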

TestFunctional/parallel/SSHCmd (0.41s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

TestFunctional/parallel/CpCmd (0.94s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh -n functional-389152 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 cp functional-389152:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd299219072/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh -n functional-389152 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.94s)
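
`minikube cp` copies in both directions: a bare path is host-side, and profile:path addresses the node (the local destination below is an arbitrary path, not the harness's temp dir):

	out/minikube-linux-amd64 -p functional-389152 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-389152 cp functional-389152:/home/docker/cp-test.txt /tmp/cp-test.txt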

TestFunctional/parallel/MySQL (40.22s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-389152 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-mqbjp" [9d20c9a0-af6d-4504-bfd1-153705d7ce2e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-mqbjp" [9d20c9a0-af6d-4504-bfd1-153705d7ce2e] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 35.033277988s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-389152 exec mysql-859648c796-mqbjp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-389152 exec mysql-859648c796-mqbjp -- mysql -ppassword -e "show databases;": exit status 1 (202.786588ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-389152 exec mysql-859648c796-mqbjp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-389152 exec mysql-859648c796-mqbjp -- mysql -ppassword -e "show databases;": exit status 1 (228.082277ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-389152 exec mysql-859648c796-mqbjp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-389152 exec mysql-859648c796-mqbjp -- mysql -ppassword -e "show databases;": exit status 1 (258.885276ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-389152 exec mysql-859648c796-mqbjp -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (40.22s)
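
The "Access denied" and socket errors above are expected while mysqld is still initializing; the harness simply reruns the query until it succeeds. A shell equivalent (pod name taken from this run; the 5s interval is an arbitrary choice):

	until kubectl --context functional-389152 exec mysql-859648c796-mqbjp -- mysql -ppassword -e "show databases;"; do sleep 5; done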

TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/88244/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh "sudo cat /etc/test/nested/copy/88244/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)
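
File sync mirrors everything under the profile's .minikube/files directory into the node at the same relative path, which is why the file staged at .minikube/files/etc/test/nested/copy/88244/hosts (see CopySyncFile above) is readable in the VM:

	out/minikube-linux-amd64 -p functional-389152 ssh "sudo cat /etc/test/nested/copy/88244/hosts"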

TestFunctional/parallel/CertSync (1.57s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/88244.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh "sudo cat /etc/ssl/certs/88244.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/88244.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh "sudo cat /usr/share/ca-certificates/88244.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/882442.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh "sudo cat /etc/ssl/certs/882442.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/882442.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh "sudo cat /usr/share/ca-certificates/882442.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.57s)
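
Certificates staged on the host (assumed: the standard $MINIKUBE_HOME/certs location) are synced into the VM at both certificate locations plus an OpenSSL subject-hash alias (51391683.0 and 3ec20f2e.0 above are such hash names); each copy can be read back over ssh:

	out/minikube-linux-amd64 -p functional-389152 ssh "sudo cat /etc/ssl/certs/88244.pem"
	out/minikube-linux-amd64 -p functional-389152 ssh "sudo cat /usr/share/ca-certificates/88244.pem"
	out/minikube-linux-amd64 -p functional-389152 ssh "sudo cat /etc/ssl/certs/51391683.0"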

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-389152 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.24s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-389152 ssh "sudo systemctl is-active crio": exit status 1 (240.025981ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.24s)
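
With docker as the active runtime, `systemctl is-active crio` prints "inactive" and exits 3 (systemd's convention for inactive units); ssh propagates that as the non-zero status this test requires:

	out/minikube-linux-amd64 -p functional-389152 ssh "sudo systemctl is-active crio"   # inactive, exit 3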

TestFunctional/parallel/License (0.88s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.88s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.66s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.66s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-389152 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-389152 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-mm64m" [2cf596ee-e5bb-45d2-8ed0-818acbc615e1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-mm64m" [2cf596ee-e5bb-45d2-8ed0-818acbc615e1] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.028076462s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.23s)
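
The DeployApp flow is plain kubectl against the minikube context: create a deployment from the echoserver image, expose it as a NodePort service, and wait for the pod to become Ready. The test polls pod status itself; `kubectl wait` below is an equivalent one-liner (the timeout value is illustrative):

    kubectl --context functional-389152 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-389152 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-389152 wait --for=condition=Ready pod -l app=hello-node --timeout=600s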

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-389152 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-389152
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-389152
docker.io/kubernetesui/metrics-scraper:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-389152 image ls --format short --alsologtostderr:
I1025 21:23:20.320685   96471 out.go:296] Setting OutFile to fd 1 ...
I1025 21:23:20.320809   96471 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:23:20.320814   96471 out.go:309] Setting ErrFile to fd 2...
I1025 21:23:20.320819   96471 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:23:20.321005   96471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-80960/.minikube/bin
I1025 21:23:20.321633   96471 config.go:182] Loaded profile config "functional-389152": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 21:23:20.321741   96471 config.go:182] Loaded profile config "functional-389152": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 21:23:20.322138   96471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1025 21:23:20.322197   96471 main.go:141] libmachine: Launching plugin server for driver kvm2
I1025 21:23:20.337281   96471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44483
I1025 21:23:20.337728   96471 main.go:141] libmachine: () Calling .GetVersion
I1025 21:23:20.338323   96471 main.go:141] libmachine: Using API Version  1
I1025 21:23:20.338349   96471 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 21:23:20.338758   96471 main.go:141] libmachine: () Calling .GetMachineName
I1025 21:23:20.338964   96471 main.go:141] libmachine: (functional-389152) Calling .GetState
I1025 21:23:20.340976   96471 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1025 21:23:20.341013   96471 main.go:141] libmachine: Launching plugin server for driver kvm2
I1025 21:23:20.354886   96471 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46657
I1025 21:23:20.355266   96471 main.go:141] libmachine: () Calling .GetVersion
I1025 21:23:20.355713   96471 main.go:141] libmachine: Using API Version  1
I1025 21:23:20.355747   96471 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 21:23:20.356078   96471 main.go:141] libmachine: () Calling .GetMachineName
I1025 21:23:20.356277   96471 main.go:141] libmachine: (functional-389152) Calling .DriverName
I1025 21:23:20.356473   96471 ssh_runner.go:195] Run: systemctl --version
I1025 21:23:20.356494   96471 main.go:141] libmachine: (functional-389152) Calling .GetSSHHostname
I1025 21:23:20.359442   96471 main.go:141] libmachine: (functional-389152) DBG | domain functional-389152 has defined MAC address 52:54:00:93:5c:22 in network mk-functional-389152
I1025 21:23:20.359933   96471 main.go:141] libmachine: (functional-389152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:5c:22", ip: ""} in network mk-functional-389152: {Iface:virbr1 ExpiryTime:2023-10-25 22:19:58 +0000 UTC Type:0 Mac:52:54:00:93:5c:22 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:functional-389152 Clientid:01:52:54:00:93:5c:22}
I1025 21:23:20.359963   96471 main.go:141] libmachine: (functional-389152) DBG | domain functional-389152 has defined IP address 192.168.39.102 and MAC address 52:54:00:93:5c:22 in network mk-functional-389152
I1025 21:23:20.360058   96471 main.go:141] libmachine: (functional-389152) Calling .GetSSHPort
I1025 21:23:20.360242   96471 main.go:141] libmachine: (functional-389152) Calling .GetSSHKeyPath
I1025 21:23:20.360407   96471 main.go:141] libmachine: (functional-389152) Calling .GetSSHUsername
I1025 21:23:20.360556   96471 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/functional-389152/id_rsa Username:docker}
I1025 21:23:20.481043   96471 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1025 21:23:20.533950   96471 main.go:141] libmachine: Making call to close driver server
I1025 21:23:20.533966   96471 main.go:141] libmachine: (functional-389152) Calling .Close
I1025 21:23:20.534253   96471 main.go:141] libmachine: Successfully made call to close driver server
I1025 21:23:20.534276   96471 main.go:141] libmachine: Making call to close connection to plugin binary
I1025 21:23:20.534286   96471 main.go:141] libmachine: Making call to close driver server
I1025 21:23:20.534296   96471 main.go:141] libmachine: (functional-389152) Calling .Close
I1025 21:23:20.534517   96471 main.go:141] libmachine: Successfully made call to close driver server
I1025 21:23:20.534532   96471 main.go:141] libmachine: Making call to close connection to plugin binary
I1025 21:23:20.534544   96471 main.go:141] libmachine: (functional-389152) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
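
As the stderr trace shows, `image ls` is backed by a single `docker images --no-trunc --format "{{json .}}"` call inside the guest; the --format flag only changes how that inventory is rendered. Sketch:

    out/minikube-linux-amd64 -p functional-389152 image ls --format short   # one image:tag per line
    out/minikube-linux-amd64 -p functional-389152 image ls --format table   # aligned table with IDs and sizes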

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-389152 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.28.3           | bfc896cf80fba | 73.1MB |
| docker.io/library/mysql                     | 5.7               | 3b85be0b10d38 | 581MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/google-containers/addon-resizer      | functional-389152 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | 593aee2afb642 | 187MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-apiserver              | v1.28.3           | 5374347291230 | 126MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.3           | 10baa1ca17068 | 122MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-389152 | ba30ce67d6c9e | 30B    |
| registry.k8s.io/kube-scheduler              | v1.28.3           | 6d1b4fd1b182d | 60.1MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-389152 image ls --format table --alsologtostderr:
I1025 21:23:21.180131   96594 out.go:296] Setting OutFile to fd 1 ...
I1025 21:23:21.180266   96594 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:23:21.180275   96594 out.go:309] Setting ErrFile to fd 2...
I1025 21:23:21.180280   96594 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:23:21.180458   96594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-80960/.minikube/bin
I1025 21:23:21.181043   96594 config.go:182] Loaded profile config "functional-389152": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 21:23:21.181159   96594 config.go:182] Loaded profile config "functional-389152": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 21:23:21.181608   96594 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1025 21:23:21.181651   96594 main.go:141] libmachine: Launching plugin server for driver kvm2
I1025 21:23:21.196108   96594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43165
I1025 21:23:21.196558   96594 main.go:141] libmachine: () Calling .GetVersion
I1025 21:23:21.197167   96594 main.go:141] libmachine: Using API Version  1
I1025 21:23:21.197197   96594 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 21:23:21.197615   96594 main.go:141] libmachine: () Calling .GetMachineName
I1025 21:23:21.197832   96594 main.go:141] libmachine: (functional-389152) Calling .GetState
I1025 21:23:21.199866   96594 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1025 21:23:21.199911   96594 main.go:141] libmachine: Launching plugin server for driver kvm2
I1025 21:23:21.214582   96594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39613
I1025 21:23:21.215052   96594 main.go:141] libmachine: () Calling .GetVersion
I1025 21:23:21.215597   96594 main.go:141] libmachine: Using API Version  1
I1025 21:23:21.215622   96594 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 21:23:21.215917   96594 main.go:141] libmachine: () Calling .GetMachineName
I1025 21:23:21.216128   96594 main.go:141] libmachine: (functional-389152) Calling .DriverName
I1025 21:23:21.216348   96594 ssh_runner.go:195] Run: systemctl --version
I1025 21:23:21.216382   96594 main.go:141] libmachine: (functional-389152) Calling .GetSSHHostname
I1025 21:23:21.219093   96594 main.go:141] libmachine: (functional-389152) DBG | domain functional-389152 has defined MAC address 52:54:00:93:5c:22 in network mk-functional-389152
I1025 21:23:21.219594   96594 main.go:141] libmachine: (functional-389152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:5c:22", ip: ""} in network mk-functional-389152: {Iface:virbr1 ExpiryTime:2023-10-25 22:19:58 +0000 UTC Type:0 Mac:52:54:00:93:5c:22 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:functional-389152 Clientid:01:52:54:00:93:5c:22}
I1025 21:23:21.219626   96594 main.go:141] libmachine: (functional-389152) DBG | domain functional-389152 has defined IP address 192.168.39.102 and MAC address 52:54:00:93:5c:22 in network mk-functional-389152
I1025 21:23:21.219795   96594 main.go:141] libmachine: (functional-389152) Calling .GetSSHPort
I1025 21:23:21.219992   96594 main.go:141] libmachine: (functional-389152) Calling .GetSSHKeyPath
I1025 21:23:21.220186   96594 main.go:141] libmachine: (functional-389152) Calling .GetSSHUsername
I1025 21:23:21.220351   96594 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/functional-389152/id_rsa Username:docker}
I1025 21:23:21.320765   96594 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1025 21:23:21.384386   96594 main.go:141] libmachine: Making call to close driver server
I1025 21:23:21.384407   96594 main.go:141] libmachine: (functional-389152) Calling .Close
I1025 21:23:21.384707   96594 main.go:141] libmachine: Successfully made call to close driver server
I1025 21:23:21.384735   96594 main.go:141] libmachine: Making call to close connection to plugin binary
I1025 21:23:21.384746   96594 main.go:141] libmachine: Making call to close driver server
I1025 21:23:21.384760   96594 main.go:141] libmachine: (functional-389152) Calling .Close
I1025 21:23:21.384719   96594 main.go:141] libmachine: (functional-389152) DBG | Closing plugin on server side
I1025 21:23:21.385000   96594 main.go:141] libmachine: Successfully made call to close driver server
I1025 21:23:21.385018   96594 main.go:141] libmachine: (functional-389152) DBG | Closing plugin on server side
I1025 21:23:21.385027   96594 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-389152 image ls --format json --alsologtostderr:
[{"id":"10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"122000000"},{"id":"6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"60100000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-389152"],"size":"32900000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"ba30ce67d6c9ec71fdc0e67066ded62694d19b72177c9f4c761a7a8d01b77b92","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-389152"],"size":"30"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"ea
d0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"593aee2afb642798b83a85306d2625fd7f089c0a1242c7e75a237846d80aa2a0","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"3b85be0b10d389e268b35d4c04075b95c295dd24d595e8c5261e43ab94c47de4","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"581000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags"
:["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"126000000"},{"id":"bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"73100000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-389152 image ls --format json --alsologtostderr:
I1025 21:23:20.899775   96537 out.go:296] Setting OutFile to fd 1 ...
I1025 21:23:20.900101   96537 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:23:20.900138   96537 out.go:309] Setting ErrFile to fd 2...
I1025 21:23:20.900149   96537 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:23:20.900434   96537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-80960/.minikube/bin
I1025 21:23:20.901034   96537 config.go:182] Loaded profile config "functional-389152": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 21:23:20.901156   96537 config.go:182] Loaded profile config "functional-389152": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 21:23:20.901528   96537 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1025 21:23:20.901585   96537 main.go:141] libmachine: Launching plugin server for driver kvm2
I1025 21:23:20.915340   96537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37477
I1025 21:23:20.915753   96537 main.go:141] libmachine: () Calling .GetVersion
I1025 21:23:20.916513   96537 main.go:141] libmachine: Using API Version  1
I1025 21:23:20.916547   96537 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 21:23:20.916907   96537 main.go:141] libmachine: () Calling .GetMachineName
I1025 21:23:20.917134   96537 main.go:141] libmachine: (functional-389152) Calling .GetState
I1025 21:23:20.919117   96537 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1025 21:23:20.919152   96537 main.go:141] libmachine: Launching plugin server for driver kvm2
I1025 21:23:20.932346   96537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37549
I1025 21:23:20.932725   96537 main.go:141] libmachine: () Calling .GetVersion
I1025 21:23:20.933139   96537 main.go:141] libmachine: Using API Version  1
I1025 21:23:20.933161   96537 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 21:23:20.933458   96537 main.go:141] libmachine: () Calling .GetMachineName
I1025 21:23:20.933621   96537 main.go:141] libmachine: (functional-389152) Calling .DriverName
I1025 21:23:20.933811   96537 ssh_runner.go:195] Run: systemctl --version
I1025 21:23:20.933838   96537 main.go:141] libmachine: (functional-389152) Calling .GetSSHHostname
I1025 21:23:20.937404   96537 main.go:141] libmachine: (functional-389152) DBG | domain functional-389152 has defined MAC address 52:54:00:93:5c:22 in network mk-functional-389152
I1025 21:23:20.937830   96537 main.go:141] libmachine: (functional-389152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:5c:22", ip: ""} in network mk-functional-389152: {Iface:virbr1 ExpiryTime:2023-10-25 22:19:58 +0000 UTC Type:0 Mac:52:54:00:93:5c:22 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:functional-389152 Clientid:01:52:54:00:93:5c:22}
I1025 21:23:20.937872   96537 main.go:141] libmachine: (functional-389152) DBG | domain functional-389152 has defined IP address 192.168.39.102 and MAC address 52:54:00:93:5c:22 in network mk-functional-389152
I1025 21:23:20.937976   96537 main.go:141] libmachine: (functional-389152) Calling .GetSSHPort
I1025 21:23:20.938146   96537 main.go:141] libmachine: (functional-389152) Calling .GetSSHKeyPath
I1025 21:23:20.938365   96537 main.go:141] libmachine: (functional-389152) Calling .GetSSHUsername
I1025 21:23:20.938541   96537 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/functional-389152/id_rsa Username:docker}
I1025 21:23:21.039666   96537 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1025 21:23:21.098661   96537 main.go:141] libmachine: Making call to close driver server
I1025 21:23:21.098681   96537 main.go:141] libmachine: (functional-389152) Calling .Close
I1025 21:23:21.098984   96537 main.go:141] libmachine: (functional-389152) DBG | Closing plugin on server side
I1025 21:23:21.098990   96537 main.go:141] libmachine: Successfully made call to close driver server
I1025 21:23:21.099036   96537 main.go:141] libmachine: Making call to close connection to plugin binary
I1025 21:23:21.099050   96537 main.go:141] libmachine: Making call to close driver server
I1025 21:23:21.099063   96537 main.go:141] libmachine: (functional-389152) Calling .Close
I1025 21:23:21.099485   96537 main.go:141] libmachine: (functional-389152) DBG | Closing plugin on server side
I1025 21:23:21.099525   96537 main.go:141] libmachine: Successfully made call to close driver server
I1025 21:23:21.099550   96537 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
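
The JSON form is the easiest to post-process. A hedged sketch, assuming jq is available on the host (field names match the output above; size is a string, hence tonumber):

    out/minikube-linux-amd64 -p functional-389152 image ls --format json \
      | jq -r 'sort_by(.size | tonumber) | .[] | "\(.repoTags[0])\t\(.size)"'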

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-389152 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "126000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 593aee2afb642798b83a85306d2625fd7f089c0a1242c7e75a237846d80aa2a0
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "122000000"
- id: 6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "60100000"
- id: bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "73100000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: ba30ce67d6c9ec71fdc0e67066ded62694d19b72177c9f4c761a7a8d01b77b92
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-389152
size: "30"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-389152
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 3b85be0b10d389e268b35d4c04075b95c295dd24d595e8c5261e43ab94c47de4
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "581000000"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-389152 image ls --format yaml --alsologtostderr:
I1025 21:23:20.602110   96495 out.go:296] Setting OutFile to fd 1 ...
I1025 21:23:20.602285   96495 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:23:20.602297   96495 out.go:309] Setting ErrFile to fd 2...
I1025 21:23:20.602305   96495 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:23:20.602632   96495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-80960/.minikube/bin
I1025 21:23:20.603480   96495 config.go:182] Loaded profile config "functional-389152": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 21:23:20.603596   96495 config.go:182] Loaded profile config "functional-389152": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 21:23:20.603953   96495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1025 21:23:20.603994   96495 main.go:141] libmachine: Launching plugin server for driver kvm2
I1025 21:23:20.622225   96495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46315
I1025 21:23:20.622646   96495 main.go:141] libmachine: () Calling .GetVersion
I1025 21:23:20.623327   96495 main.go:141] libmachine: Using API Version  1
I1025 21:23:20.623361   96495 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 21:23:20.623706   96495 main.go:141] libmachine: () Calling .GetMachineName
I1025 21:23:20.623905   96495 main.go:141] libmachine: (functional-389152) Calling .GetState
I1025 21:23:20.625978   96495 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1025 21:23:20.626024   96495 main.go:141] libmachine: Launching plugin server for driver kvm2
I1025 21:23:20.640382   96495 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40127
I1025 21:23:20.640835   96495 main.go:141] libmachine: () Calling .GetVersion
I1025 21:23:20.641281   96495 main.go:141] libmachine: Using API Version  1
I1025 21:23:20.641299   96495 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 21:23:20.641620   96495 main.go:141] libmachine: () Calling .GetMachineName
I1025 21:23:20.641767   96495 main.go:141] libmachine: (functional-389152) Calling .DriverName
I1025 21:23:20.641940   96495 ssh_runner.go:195] Run: systemctl --version
I1025 21:23:20.641963   96495 main.go:141] libmachine: (functional-389152) Calling .GetSSHHostname
I1025 21:23:20.644911   96495 main.go:141] libmachine: (functional-389152) DBG | domain functional-389152 has defined MAC address 52:54:00:93:5c:22 in network mk-functional-389152
I1025 21:23:20.645266   96495 main.go:141] libmachine: (functional-389152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:5c:22", ip: ""} in network mk-functional-389152: {Iface:virbr1 ExpiryTime:2023-10-25 22:19:58 +0000 UTC Type:0 Mac:52:54:00:93:5c:22 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:functional-389152 Clientid:01:52:54:00:93:5c:22}
I1025 21:23:20.645298   96495 main.go:141] libmachine: (functional-389152) DBG | domain functional-389152 has defined IP address 192.168.39.102 and MAC address 52:54:00:93:5c:22 in network mk-functional-389152
I1025 21:23:20.645533   96495 main.go:141] libmachine: (functional-389152) Calling .GetSSHPort
I1025 21:23:20.645664   96495 main.go:141] libmachine: (functional-389152) Calling .GetSSHKeyPath
I1025 21:23:20.645821   96495 main.go:141] libmachine: (functional-389152) Calling .GetSSHUsername
I1025 21:23:20.645919   96495 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/functional-389152/id_rsa Username:docker}
I1025 21:23:20.759308   96495 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1025 21:23:20.799016   96495 main.go:141] libmachine: Making call to close driver server
I1025 21:23:20.799027   96495 main.go:141] libmachine: (functional-389152) Calling .Close
I1025 21:23:20.799387   96495 main.go:141] libmachine: Successfully made call to close driver server
I1025 21:23:20.799414   96495 main.go:141] libmachine: Making call to close connection to plugin binary
I1025 21:23:20.799426   96495 main.go:141] libmachine: Making call to close driver server
I1025 21:23:20.799436   96495 main.go:141] libmachine: (functional-389152) Calling .Close
I1025 21:23:20.799700   96495 main.go:141] libmachine: (functional-389152) DBG | Closing plugin on server side
I1025 21:23:20.799702   96495 main.go:141] libmachine: Successfully made call to close driver server
I1025 21:23:20.799730   96495 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-389152 ssh pgrep buildkitd: exit status 1 (227.574032ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 image build -t localhost/my-image:functional-389152 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-389152 image build -t localhost/my-image:functional-389152 testdata/build --alsologtostderr: (4.599729478s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-389152 image build -t localhost/my-image:functional-389152 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 176756523ea8
Removing intermediate container 176756523ea8
---> 18049853e780
Step 3/3 : ADD content.txt /
---> 334cc5e9a5e0
Successfully built 334cc5e9a5e0
Successfully tagged localhost/my-image:functional-389152
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-389152 image build -t localhost/my-image:functional-389152 testdata/build --alsologtostderr:
I1025 21:23:21.103690   96581 out.go:296] Setting OutFile to fd 1 ...
I1025 21:23:21.103849   96581 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:23:21.103862   96581 out.go:309] Setting ErrFile to fd 2...
I1025 21:23:21.103869   96581 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1025 21:23:21.104124   96581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-80960/.minikube/bin
I1025 21:23:21.104795   96581 config.go:182] Loaded profile config "functional-389152": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 21:23:21.105585   96581 config.go:182] Loaded profile config "functional-389152": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1025 21:23:21.106095   96581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1025 21:23:21.106199   96581 main.go:141] libmachine: Launching plugin server for driver kvm2
I1025 21:23:21.121242   96581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41851
I1025 21:23:21.121781   96581 main.go:141] libmachine: () Calling .GetVersion
I1025 21:23:21.122400   96581 main.go:141] libmachine: Using API Version  1
I1025 21:23:21.122429   96581 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 21:23:21.122745   96581 main.go:141] libmachine: () Calling .GetMachineName
I1025 21:23:21.122974   96581 main.go:141] libmachine: (functional-389152) Calling .GetState
I1025 21:23:21.125007   96581 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1025 21:23:21.125053   96581 main.go:141] libmachine: Launching plugin server for driver kvm2
I1025 21:23:21.140975   96581 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33331
I1025 21:23:21.141482   96581 main.go:141] libmachine: () Calling .GetVersion
I1025 21:23:21.142037   96581 main.go:141] libmachine: Using API Version  1
I1025 21:23:21.142068   96581 main.go:141] libmachine: () Calling .SetConfigRaw
I1025 21:23:21.142395   96581 main.go:141] libmachine: () Calling .GetMachineName
I1025 21:23:21.142594   96581 main.go:141] libmachine: (functional-389152) Calling .DriverName
I1025 21:23:21.142886   96581 ssh_runner.go:195] Run: systemctl --version
I1025 21:23:21.142919   96581 main.go:141] libmachine: (functional-389152) Calling .GetSSHHostname
I1025 21:23:21.145629   96581 main.go:141] libmachine: (functional-389152) DBG | domain functional-389152 has defined MAC address 52:54:00:93:5c:22 in network mk-functional-389152
I1025 21:23:21.146131   96581 main.go:141] libmachine: (functional-389152) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:93:5c:22", ip: ""} in network mk-functional-389152: {Iface:virbr1 ExpiryTime:2023-10-25 22:19:58 +0000 UTC Type:0 Mac:52:54:00:93:5c:22 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:functional-389152 Clientid:01:52:54:00:93:5c:22}
I1025 21:23:21.146150   96581 main.go:141] libmachine: (functional-389152) DBG | domain functional-389152 has defined IP address 192.168.39.102 and MAC address 52:54:00:93:5c:22 in network mk-functional-389152
I1025 21:23:21.146420   96581 main.go:141] libmachine: (functional-389152) Calling .GetSSHPort
I1025 21:23:21.146633   96581 main.go:141] libmachine: (functional-389152) Calling .GetSSHKeyPath
I1025 21:23:21.146814   96581 main.go:141] libmachine: (functional-389152) Calling .GetSSHUsername
I1025 21:23:21.146967   96581 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/functional-389152/id_rsa Username:docker}
I1025 21:23:21.255316   96581 build_images.go:151] Building image from path: /tmp/build.726971427.tar
I1025 21:23:21.255387   96581 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1025 21:23:21.272690   96581 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.726971427.tar
I1025 21:23:21.289192   96581 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.726971427.tar: stat -c "%s %y" /var/lib/minikube/build/build.726971427.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.726971427.tar': No such file or directory
I1025 21:23:21.289230   96581 ssh_runner.go:362] scp /tmp/build.726971427.tar --> /var/lib/minikube/build/build.726971427.tar (3072 bytes)
I1025 21:23:21.339094   96581 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.726971427
I1025 21:23:21.366048   96581 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.726971427 -xf /var/lib/minikube/build/build.726971427.tar
I1025 21:23:21.377454   96581 docker.go:341] Building image: /var/lib/minikube/build/build.726971427
I1025 21:23:21.377551   96581 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-389152 /var/lib/minikube/build/build.726971427
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I1025 21:23:25.605640   96581 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-389152 /var/lib/minikube/build/build.726971427: (4.228047372s)
I1025 21:23:25.605735   96581 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.726971427
I1025 21:23:25.617987   96581 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.726971427.tar
I1025 21:23:25.627870   96581 build_images.go:207] Built localhost/my-image:functional-389152 from /tmp/build.726971427.tar
I1025 21:23:25.627900   96581 build_images.go:123] succeeded building to: functional-389152
I1025 21:23:25.627905   96581 build_images.go:124] failed building to: 
I1025 21:23:25.627928   96581 main.go:141] libmachine: Making call to close driver server
I1025 21:23:25.627944   96581 main.go:141] libmachine: (functional-389152) Calling .Close
I1025 21:23:25.628266   96581 main.go:141] libmachine: Successfully made call to close driver server
I1025 21:23:25.628287   96581 main.go:141] libmachine: Making call to close connection to plugin binary
I1025 21:23:25.628297   96581 main.go:141] libmachine: Making call to close driver server
I1025 21:23:25.628308   96581 main.go:141] libmachine: (functional-389152) Calling .Close
I1025 21:23:25.628315   96581 main.go:141] libmachine: (functional-389152) DBG | Closing plugin on server side
I1025 21:23:25.628575   96581 main.go:141] libmachine: (functional-389152) DBG | Closing plugin on server side
I1025 21:23:25.628638   96581 main.go:141] libmachine: Successfully made call to close driver server
I1025 21:23:25.628659   96581 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.07s)
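
The Step 1/3 .. 3/3 lines above imply a three-line Dockerfile in testdata/build; reconstructed from the log (an inference, not copied from the repo):

    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /

The stderr trace also shows the mechanics: the build context is tarred to /tmp on the host, copied to /var/lib/minikube/build/ in the guest, and built there with `docker build`.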

TestFunctional/parallel/ImageCommands/Setup (3.4s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.380964721s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-389152
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.40s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 image load --daemon gcr.io/google-containers/addon-resizer:functional-389152 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-389152 image load --daemon gcr.io/google-containers/addon-resizer:functional-389152 --alsologtostderr: (4.058821554s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.26s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 image load --daemon gcr.io/google-containers/addon-resizer:functional-389152 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-389152 image load --daemon gcr.io/google-containers/addon-resizer:functional-389152 --alsologtostderr: (2.203959162s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.46s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.01845768s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-389152
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 image load --daemon gcr.io/google-containers/addon-resizer:functional-389152 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-389152 image load --daemon gcr.io/google-containers/addon-resizer:functional-389152 --alsologtostderr: (4.94648042s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.25s)
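
The three daemon-load variants above share one pattern: get the image into the host docker daemon, then push it into the guest runtime with `image load --daemon`. Sketch:

    docker pull gcr.io/google-containers/addon-resizer:1.8.9
    docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-389152
    out/minikube-linux-amd64 -p functional-389152 image load --daemon gcr.io/google-containers/addon-resizer:functional-389152
    out/minikube-linux-amd64 -p functional-389152 image ls   # the tag should now be visible in the guest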

TestFunctional/parallel/ServiceCmd/List (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.45s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 service list -o json
functional_test.go:1493: Took "355.867045ms" to run "out/minikube-linux-amd64 -p functional-389152 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.102:32500
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/DockerEnv/bash (1.01s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-389152 docker-env) && out/minikube-linux-amd64 status -p functional-389152"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-389152 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.01s)
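
`docker-env` prints shell exports (DOCKER_HOST, DOCKER_CERT_PATH, DOCKER_TLS_VERIFY) that point the host docker client at the daemon inside the VM; eval-ing them is what makes the second command list the guest's images. Sketch:

    # Point the local docker client at the minikube VM for this shell session.
    eval $(out/minikube-linux-amd64 -p functional-389152 docker-env)
    docker images   # lists the guest's images, not the host's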

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.102:32500
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)
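
HTTPS, Format, and URL above are output variants of the same service lookup; NodePort 32500 appears in both the http and https endpoints. Sketch:

    out/minikube-linux-amd64 -p functional-389152 service hello-node --url                    # http://192.168.39.102:32500
    out/minikube-linux-amd64 -p functional-389152 service --namespace=default --https --url hello-node
    out/minikube-linux-amd64 -p functional-389152 service hello-node --url --format={{.IP}}   # node IP only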

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)
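
All three update-context cases run the same command: it rewrites the server endpoint for this cluster in the active kubeconfig to match the VM's current IP and port, and -v=2 logs whether anything changed. Sketch:

    out/minikube-linux-amd64 -p functional-389152 update-context --alsologtostderr -v=2
    kubectl config view --minify   # confirm the server: entry matches the VM IP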

TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "285.05572ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "69.116542ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "300.394151ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "64.033949ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)
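
The timings above show why --light exists: it skips the live status probe per profile, returning in roughly 64-69ms versus 285-300ms for the full listing. Sketch:

    out/minikube-linux-amd64 profile list                  # table with live cluster status
    out/minikube-linux-amd64 profile list -o json --light  # config only, no status checks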

TestFunctional/parallel/MountCmd/any-port (33.82s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-389152 /tmp/TestFunctionalparallelMountCmdany-port568368994/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1698268961217192027" to /tmp/TestFunctionalparallelMountCmdany-port568368994/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1698268961217192027" to /tmp/TestFunctionalparallelMountCmdany-port568368994/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1698268961217192027" to /tmp/TestFunctionalparallelMountCmdany-port568368994/001/test-1698268961217192027
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-389152 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (248.792146ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 25 21:22 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 25 21:22 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 25 21:22 test-1698268961217192027
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh cat /mount-9p/test-1698268961217192027
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-389152 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7fd78719-8730-42f8-bc5f-2736ad6148f2] Pending
helpers_test.go:344: "busybox-mount" [7fd78719-8730-42f8-bc5f-2736ad6148f2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [7fd78719-8730-42f8-bc5f-2736ad6148f2] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [7fd78719-8730-42f8-bc5f-2736ad6148f2] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 31.016691943s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-389152 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-389152 /tmp/TestFunctionalparallelMountCmdany-port568368994/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (33.82s)
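
Note: the first findmnt probe above exits 1 only because it races the mount daemon's startup; the test retries and succeeds. The full 9p round trip can be reproduced by hand; a minimal sketch, assuming a running functional-389152 profile and a hypothetical host directory /tmp/host-dir:

  # start the 9p mount daemon in the background
  out/minikube-linux-amd64 mount -p functional-389152 /tmp/host-dir:/mount-9p --alsologtostderr -v=1 &
  # confirm the guest sees a 9p filesystem at the mount point (retry while the daemon starts)
  out/minikube-linux-amd64 -p functional-389152 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-389152 ssh -- ls -la /mount-9p
  # unmount from inside the guest when done
  out/minikube-linux-amd64 -p functional-389152 ssh "sudo umount -f /mount-9p"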

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 image save gcr.io/google-containers/addon-resizer:functional-389152 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-389152 image save gcr.io/google-containers/addon-resizer:functional-389152 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.466516647s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 image rm gcr.io/google-containers/addon-resizer:functional-389152 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-389152 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.99836611s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.20s)
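
Note: ImageSaveToFile, ImageRemove, and ImageLoadFromFile together form a save/remove/reload round trip for an image held in the cluster's runtime. A condensed sketch, with the tar written to the current directory for illustration:

  out/minikube-linux-amd64 -p functional-389152 image save \
      gcr.io/google-containers/addon-resizer:functional-389152 ./addon-resizer-save.tar
  out/minikube-linux-amd64 -p functional-389152 image rm gcr.io/google-containers/addon-resizer:functional-389152
  out/minikube-linux-amd64 -p functional-389152 image load ./addon-resizer-save.tar
  # the tag should appear in the listing again after the load
  out/minikube-linux-amd64 -p functional-389152 image ls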

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-389152
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 image save --daemon gcr.io/google-containers/addon-resizer:functional-389152 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-389152 image save --daemon gcr.io/google-containers/addon-resizer:functional-389152 --alsologtostderr: (1.301342438s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-389152
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.34s)
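
Note: image save --daemon exports the image from the cluster runtime straight into the host's Docker daemon, which is why the test brackets it with docker rmi and docker image inspect. A minimal sketch:

  docker rmi gcr.io/google-containers/addon-resizer:functional-389152
  out/minikube-linux-amd64 -p functional-389152 image save --daemon \
      gcr.io/google-containers/addon-resizer:functional-389152
  # succeeds only if the save repopulated the host daemon
  docker image inspect gcr.io/google-containers/addon-resizer:functional-389152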

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.59s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-389152 /tmp/TestFunctionalparallelMountCmdspecific-port3208458899/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-389152 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (225.755478ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-389152 /tmp/TestFunctionalparallelMountCmdspecific-port3208458899/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-389152 ssh "sudo umount -f /mount-9p": exit status 1 (207.846674ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-389152 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-389152 /tmp/TestFunctionalparallelMountCmdspecific-port3208458899/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.59s)
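
Note: --port 46464 pins the host side of the 9p server to a fixed port rather than an ephemeral one, and the "exit status 32" during cleanup is umount reporting "not mounted", which the test tolerates since the daemon had already been stopped. A sketch, again with a hypothetical /tmp/host-dir:

  out/minikube-linux-amd64 mount -p functional-389152 /tmp/host-dir:/mount-9p --port 46464 &
  out/minikube-linux-amd64 -p functional-389152 ssh "findmnt -T /mount-9p | grep 9p"
  # exits 32 ("not mounted") if the mount is already gone
  out/minikube-linux-amd64 -p functional-389152 ssh "sudo umount -f /mount-9p"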

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.69s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-389152 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1967012494/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-389152 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1967012494/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-389152 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1967012494/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-389152 ssh "findmnt -T" /mount1: exit status 1 (276.895503ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-389152 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-389152 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-389152 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1967012494/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-389152 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1967012494/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-389152 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1967012494/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.69s)
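
Note: this subtest points three mount daemons at one profile and relies on mount --kill=true to reap all of them at once, which is why each later stop attempt logs "unable to find parent, assuming dead". A sketch with a hypothetical /tmp/host-dir:

  out/minikube-linux-amd64 mount -p functional-389152 /tmp/host-dir:/mount1 &
  out/minikube-linux-amd64 mount -p functional-389152 /tmp/host-dir:/mount2 &
  out/minikube-linux-amd64 mount -p functional-389152 /tmp/host-dir:/mount3 &
  # kills every mount daemon associated with the profile
  out/minikube-linux-amd64 mount -p functional-389152 --kill=true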

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-389152
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-389152
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-389152
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestGvisorAddon (348.03s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-342758 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-342758 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m53.218938132s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-342758 cache add gcr.io/k8s-minikube/gvisor-addon:2
E1025 21:58:15.639881   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/skaffold-183899/client.crt: no such file or directory
E1025 21:58:15.645208   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/skaffold-183899/client.crt: no such file or directory
E1025 21:58:15.655658   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/skaffold-183899/client.crt: no such file or directory
E1025 21:58:15.676180   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/skaffold-183899/client.crt: no such file or directory
E1025 21:58:15.716658   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/skaffold-183899/client.crt: no such file or directory
E1025 21:58:15.797028   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/skaffold-183899/client.crt: no such file or directory
E1025 21:58:15.957489   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/skaffold-183899/client.crt: no such file or directory
E1025 21:58:16.278121   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/skaffold-183899/client.crt: no such file or directory
E1025 21:58:16.919264   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/skaffold-183899/client.crt: no such file or directory
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-342758 cache add gcr.io/k8s-minikube/gvisor-addon:2: (22.051559317s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-342758 addons enable gvisor
E1025 21:58:36.122159   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/skaffold-183899/client.crt: no such file or directory
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-342758 addons enable gvisor: (5.860185511s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [57eed33e-df69-4427-9103-8b050ba96234] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.021812658s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-342758 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [40969594-8921-4191-9a6d-16c06916a22d] Pending
helpers_test.go:344: "nginx-gvisor" [40969594-8921-4191-9a6d-16c06916a22d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1025 21:58:56.603276   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/skaffold-183899/client.crt: no such file or directory
helpers_test.go:344: "nginx-gvisor" [40969594-8921-4191-9a6d-16c06916a22d] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 55.01549115s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-342758
E1025 21:59:37.564255   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/skaffold-183899/client.crt: no such file or directory
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-342758: (1m31.969904969s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-342758 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-342758 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (43.384286857s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [57eed33e-df69-4427-9103-8b050ba96234] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.02443323s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [40969594-8921-4191-9a6d-16c06916a22d] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.012970987s
helpers_test.go:175: Cleaning up "gvisor-342758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-342758
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-342758: (1.194046101s)
--- PASS: TestGvisorAddon (348.03s)
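
Note: the gvisor flow is start with containerd, pre-cache the addon image, enable the addon, schedule a pod that requests the gvisor runtime, then confirm both addon and pod come back after a stop/start cycle. (The repeated cert_rotation lines reference the already-deleted skaffold-183899 profile and look like unrelated watcher noise.) A condensed sketch of the sequence:

  out/minikube-linux-amd64 start -p gvisor-342758 --memory=2200 --container-runtime=containerd \
      --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2
  out/minikube-linux-amd64 -p gvisor-342758 cache add gcr.io/k8s-minikube/gvisor-addon:2
  out/minikube-linux-amd64 -p gvisor-342758 addons enable gvisor
  kubectl --context gvisor-342758 replace --force -f testdata/nginx-gvisor.yaml
  kubectl --context gvisor-342758 get pods -l run=nginx,runtime=gvisor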

                                                
                                    
TestImageBuild/serial/Setup (52.44s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-905062 --driver=kvm2 
E1025 21:24:27.548176   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-905062 --driver=kvm2 : (52.444083654s)
--- PASS: TestImageBuild/serial/Setup (52.44s)

                                                
                                    
TestImageBuild/serial/NormalBuild (3.43s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-905062
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-905062: (3.427243708s)
--- PASS: TestImageBuild/serial/NormalBuild (3.43s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1.29s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-905062
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-905062: (1.289502466s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.29s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.39s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-905062
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.39s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.29s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-905062
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.29s)
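
Note: the build subtests cover the minikube image build variants seen above: a plain build, --build-opt passthrough for build-arg and no-cache, .dockerignore handling, and -f for a Dockerfile outside the context root. For reference:

  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-905062
  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str \
      --build-opt=no-cache ./testdata/image-build/test-arg -p image-905062
  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-905062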

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (142.24s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-106045 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
E1025 21:26:43.705026   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-106045 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (2m22.235398902s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (142.24s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.56s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-106045 addons enable ingress --alsologtostderr -v=5
E1025 21:27:11.388851   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-106045 addons enable ingress --alsologtostderr -v=5: (17.557607381s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.56s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.56s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-106045 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.56s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (41.57s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-106045 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E1025 21:27:25.191807   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
E1025 21:27:25.197176   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
E1025 21:27:25.207445   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
E1025 21:27:25.227699   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
E1025 21:27:25.268043   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
E1025 21:27:25.348394   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
E1025 21:27:25.508875   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
E1025 21:27:25.829553   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
E1025 21:27:26.470627   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-106045 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.242861994s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-106045 replace --force -f testdata/nginx-ingress-v1beta1.yaml
E1025 21:27:27.751332   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-106045 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [65e9f961-c9ef-4c50-905d-85f63ed277ce] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1025 21:27:30.311936   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
helpers_test.go:344: "nginx" [65e9f961-c9ef-4c50-905d-85f63ed277ce] Running
E1025 21:27:35.432349   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 13.014774129s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-106045 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-106045 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-106045 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.32
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-106045 addons disable ingress-dns --alsologtostderr -v=1
E1025 21:27:45.673448   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-106045 addons disable ingress-dns --alsologtostderr -v=1: (8.610073252s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-106045 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-106045 addons disable ingress --alsologtostderr -v=1: (7.512737179s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (41.57s)
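
Note: the ingress check exercises the legacy v1beta1 Ingress from inside the VM (curl against 127.0.0.1 with a Host header), then validates ingress-dns by resolving a test name against the cluster IP. A condensed sketch of the two probes:

  out/minikube-linux-amd64 -p ingress-addon-legacy-106045 ssh \
      "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
  # resolve an ingress-dns-managed name against the cluster's IP
  nslookup hello-john.test "$(out/minikube-linux-amd64 -p ingress-addon-legacy-106045 ip)"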

                                                
                                    
TestJSONOutput/start/Command (69.87s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-400090 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E1025 21:28:06.154554   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
E1025 21:28:47.114777   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-400090 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m9.87071812s)
--- PASS: TestJSONOutput/start/Command (69.87s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.58s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-400090 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.58s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.53s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-400090 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.53s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (13.12s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-400090 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-400090 --output=json --user=testUser: (13.11563728s)
--- PASS: TestJSONOutput/stop/Command (13.12s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-959205 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-959205 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (79.401546ms)

-- stdout --
	{"specversion":"1.0","id":"5432da05-5a03-4c64-b132-35b11b0f772f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-959205] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"aebb2775-c7f8-438b-b1bb-8dbc065b48e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17488"}}
	{"specversion":"1.0","id":"42bea3d9-4bf6-4ff4-9ca0-93fd206d5621","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"faa5875c-1702-4bd6-ae59-5d18a0b831a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17488-80960/kubeconfig"}}
	{"specversion":"1.0","id":"a9b7a2f9-b123-4b0a-ada8-0ef857b52f33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-80960/.minikube"}}
	{"specversion":"1.0","id":"39ccc5f1-5ac5-4e17-bf51-eb439f502eb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"247223a1-70e6-441c-a9ee-07a184c202ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"13923061-9e93-4fe3-94f4-53f6faf8dee8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-959205" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-959205
--- PASS: TestErrorJSONOutput (0.22s)
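
Note: with --output=json each line minikube prints is a CloudEvents envelope (specversion, id, source, type, data), so even this failure stays machine-readable: the unsupported driver surfaces as an io.k8s.sigs.minikube.error event with exitcode 56 and name DRV_UNSUPPORTED_OS. A sketch of filtering the error out of the stream, assuming jq is available:

  out/minikube-linux-amd64 start -p json-output-error-959205 --memory=2200 --output=json \
      --wait=true --driver=fail | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'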

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (105.55s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-435414 --driver=kvm2 
E1025 21:30:09.036006   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-435414 --driver=kvm2 : (50.436715399s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-438151 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-438151 --driver=kvm2 : (52.44177509s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-435414
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-438151
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-438151" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-438151
helpers_test.go:175: Cleaning up "first-435414" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-435414
--- PASS: TestMinikubeProfile (105.55s)
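
Note: profile <name> switches the active profile and profile list -ojson emits the profile set as JSON. A sketch of pulling out just the names, assuming jq is available and that the output keeps its usual valid/invalid grouping:

  out/minikube-linux-amd64 profile first-435414
  out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'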

                                                
                                    
TestMountStart/serial/StartWithMountFirst (34.74s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-197411 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-197411 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (33.737168227s)
E1025 21:31:43.704740   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
--- PASS: TestMountStart/serial/StartWithMountFirst (34.74s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.41s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-197411 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-197411 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)
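
Note: here the mount flags are passed to minikube start itself, so the host directory (the default mount string, since none is given above) is exposed at /minikube-host without a separate mount daemon; verification is then a direct 9p check:

  out/minikube-linux-amd64 start -p mount-start-1-197411 --memory=2048 --mount --mount-gid 0 \
      --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2
  out/minikube-linux-amd64 -p mount-start-1-197411 ssh -- ls /minikube-host
  out/minikube-linux-amd64 -p mount-start-1-197411 ssh -- mount | grep 9p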

                                                
                                    
TestMountStart/serial/StartWithMountSecond (36.05s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-212859 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E1025 21:32:16.250286   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
E1025 21:32:16.255630   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
E1025 21:32:16.265975   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
E1025 21:32:16.286312   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
E1025 21:32:16.326637   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
E1025 21:32:16.406974   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
E1025 21:32:16.567390   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
E1025 21:32:16.888010   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
E1025 21:32:17.528187   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
E1025 21:32:18.808697   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-212859 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (35.052703489s)
--- PASS: TestMountStart/serial/StartWithMountSecond (36.05s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-212859 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-212859 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.67s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-197411 --alsologtostderr -v=5
E1025 21:32:21.369786   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-212859 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-212859 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
TestMountStart/serial/Stop (2.11s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-212859
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-212859: (2.105451921s)
--- PASS: TestMountStart/serial/Stop (2.11s)

                                                
                                    
TestMountStart/serial/RestartStopped (27.87s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-212859
E1025 21:32:25.192306   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
E1025 21:32:26.490846   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
E1025 21:32:36.731768   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-212859: (26.872831001s)
--- PASS: TestMountStart/serial/RestartStopped (27.87s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-212859 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-212859 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (214.28s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-086357 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E1025 21:32:57.212752   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
E1025 21:33:38.173470   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
E1025 21:35:00.096055   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-086357 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (3m33.862603423s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (214.28s)
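
Note: a two-node cluster comes up from a single start invocation; status then reports per-node state, and node add (exercised in AddNode below) grows the cluster afterwards. For reference:

  out/minikube-linux-amd64 start -p multinode-086357 --wait=true --memory=2200 --nodes=2 \
      -v=8 --alsologtostderr --driver=kvm2
  out/minikube-linux-amd64 -p multinode-086357 status --alsologtostderr
  out/minikube-linux-amd64 node add -p multinode-086357 -v 3 --alsologtostderr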

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.47s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-086357 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-086357 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-086357 -- rollout status deployment/busybox: (4.560551149s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-086357 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-086357 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-086357 -- exec busybox-5bc68d56bd-rf25q -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-086357 -- exec busybox-5bc68d56bd-wsl8f -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-086357 -- exec busybox-5bc68d56bd-rf25q -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-086357 -- exec busybox-5bc68d56bd-wsl8f -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-086357 -- exec busybox-5bc68d56bd-rf25q -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-086357 -- exec busybox-5bc68d56bd-wsl8f -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.47s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.95s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-086357 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-086357 -- exec busybox-5bc68d56bd-rf25q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-086357 -- exec busybox-5bc68d56bd-rf25q -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-086357 -- exec busybox-5bc68d56bd-wsl8f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-086357 -- exec busybox-5bc68d56bd-wsl8f -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.95s)
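
Note: the in-pod pipeline extracts the host gateway address: in busybox's nslookup output the answer for host.minikube.internal lands on line 5 (hence awk 'NR==5') and the address is the third space-separated field (cut -d' ' -f3), which here resolves to 192.168.39.1 on the kvm2 network. A sketch, with the HOST_IP variable introduced for illustration:

  HOST_IP=$(out/minikube-linux-amd64 kubectl -p multinode-086357 -- exec busybox-5bc68d56bd-rf25q -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  out/minikube-linux-amd64 kubectl -p multinode-086357 -- exec busybox-5bc68d56bd-rf25q -- \
      sh -c "ping -c 1 $HOST_IP"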

                                                
                                    
TestMultiNode/serial/AddNode (51.37s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-086357 -v 3 --alsologtostderr
E1025 21:36:43.704523   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
E1025 21:37:16.250309   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
E1025 21:37:25.192059   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-086357 -v 3 --alsologtostderr: (50.794421512s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.37s)

TestMultiNode/serial/ProfileList (0.22s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

TestMultiNode/serial/CopyFile (7.66s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 cp testdata/cp-test.txt multinode-086357:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 ssh -n multinode-086357 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 cp multinode-086357:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1205141104/001/cp-test_multinode-086357.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 ssh -n multinode-086357 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 cp multinode-086357:/home/docker/cp-test.txt multinode-086357-m02:/home/docker/cp-test_multinode-086357_multinode-086357-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 ssh -n multinode-086357 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 ssh -n multinode-086357-m02 "sudo cat /home/docker/cp-test_multinode-086357_multinode-086357-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 cp multinode-086357:/home/docker/cp-test.txt multinode-086357-m03:/home/docker/cp-test_multinode-086357_multinode-086357-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 ssh -n multinode-086357 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 ssh -n multinode-086357-m03 "sudo cat /home/docker/cp-test_multinode-086357_multinode-086357-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 cp testdata/cp-test.txt multinode-086357-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 ssh -n multinode-086357-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 cp multinode-086357-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1205141104/001/cp-test_multinode-086357-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 ssh -n multinode-086357-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 cp multinode-086357-m02:/home/docker/cp-test.txt multinode-086357:/home/docker/cp-test_multinode-086357-m02_multinode-086357.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 ssh -n multinode-086357-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 ssh -n multinode-086357 "sudo cat /home/docker/cp-test_multinode-086357-m02_multinode-086357.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 cp multinode-086357-m02:/home/docker/cp-test.txt multinode-086357-m03:/home/docker/cp-test_multinode-086357-m02_multinode-086357-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 ssh -n multinode-086357-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 ssh -n multinode-086357-m03 "sudo cat /home/docker/cp-test_multinode-086357-m02_multinode-086357-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 cp testdata/cp-test.txt multinode-086357-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 ssh -n multinode-086357-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 cp multinode-086357-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1205141104/001/cp-test_multinode-086357-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 ssh -n multinode-086357-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 cp multinode-086357-m03:/home/docker/cp-test.txt multinode-086357:/home/docker/cp-test_multinode-086357-m03_multinode-086357.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 ssh -n multinode-086357-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 ssh -n multinode-086357 "sudo cat /home/docker/cp-test_multinode-086357-m03_multinode-086357.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 cp multinode-086357-m03:/home/docker/cp-test.txt multinode-086357-m02:/home/docker/cp-test_multinode-086357-m03_multinode-086357-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 ssh -n multinode-086357-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 ssh -n multinode-086357-m02 "sudo cat /home/docker/cp-test_multinode-086357-m03_multinode-086357-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.66s)
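Note: CopyFile cycles minikube cp (driven here as out/minikube-linux-amd64) through all three addressing modes, then verifies each copy with ssh + sudo cat. Condensed, with the profile and node names from this run (the /tmp destination path is illustrative):

    minikube -p multinode-086357 cp testdata/cp-test.txt multinode-086357:/home/docker/cp-test.txt   # local -> node
    minikube -p multinode-086357 cp multinode-086357:/home/docker/cp-test.txt /tmp/cp-test.txt       # node -> local
    minikube -p multinode-086357 cp multinode-086357:/home/docker/cp-test.txt \
      multinode-086357-m02:/home/docker/cp-test.txt                                                  # node -> node
    minikube -p multinode-086357 ssh -n multinode-086357-m02 "sudo cat /home/docker/cp-test.txt"     # verify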

TestMultiNode/serial/StopNode (4.02s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-086357 node stop m03: (3.097425809s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-086357 status: exit status 7 (461.454202ms)

-- stdout --
	multinode-086357
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-086357-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-086357-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-086357 status --alsologtostderr: exit status 7 (455.674381ms)

-- stdout --
	multinode-086357
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-086357-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-086357-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1025 21:37:37.879585  104046 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:37:37.879742  104046 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:37:37.879754  104046 out.go:309] Setting ErrFile to fd 2...
	I1025 21:37:37.879761  104046 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:37:37.880032  104046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-80960/.minikube/bin
	I1025 21:37:37.880290  104046 out.go:303] Setting JSON to false
	I1025 21:37:37.880338  104046 mustload.go:65] Loading cluster: multinode-086357
	I1025 21:37:37.880446  104046 notify.go:220] Checking for updates...
	I1025 21:37:37.880805  104046 config.go:182] Loaded profile config "multinode-086357": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 21:37:37.880822  104046 status.go:255] checking status of multinode-086357 ...
	I1025 21:37:37.881331  104046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1025 21:37:37.881420  104046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:37:37.896044  104046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34999
	I1025 21:37:37.896483  104046 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:37:37.896992  104046 main.go:141] libmachine: Using API Version  1
	I1025 21:37:37.897021  104046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:37:37.897486  104046 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:37:37.897677  104046 main.go:141] libmachine: (multinode-086357) Calling .GetState
	I1025 21:37:37.899213  104046 status.go:330] multinode-086357 host status = "Running" (err=<nil>)
	I1025 21:37:37.899229  104046 host.go:66] Checking if "multinode-086357" exists ...
	I1025 21:37:37.899509  104046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1025 21:37:37.899546  104046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:37:37.915649  104046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34725
	I1025 21:37:37.916079  104046 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:37:37.916542  104046 main.go:141] libmachine: Using API Version  1
	I1025 21:37:37.916563  104046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:37:37.916942  104046 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:37:37.917154  104046 main.go:141] libmachine: (multinode-086357) Calling .GetIP
	I1025 21:37:37.919867  104046 main.go:141] libmachine: (multinode-086357) DBG | domain multinode-086357 has defined MAC address 52:54:00:dd:ce:ec in network mk-multinode-086357
	I1025 21:37:37.920348  104046 main.go:141] libmachine: (multinode-086357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ce:ec", ip: ""} in network mk-multinode-086357: {Iface:virbr1 ExpiryTime:2023-10-25 22:33:09 +0000 UTC Type:0 Mac:52:54:00:dd:ce:ec Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:multinode-086357 Clientid:01:52:54:00:dd:ce:ec}
	I1025 21:37:37.920398  104046 main.go:141] libmachine: (multinode-086357) DBG | domain multinode-086357 has defined IP address 192.168.39.56 and MAC address 52:54:00:dd:ce:ec in network mk-multinode-086357
	I1025 21:37:37.920499  104046 host.go:66] Checking if "multinode-086357" exists ...
	I1025 21:37:37.920775  104046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1025 21:37:37.920810  104046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:37:37.936179  104046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43371
	I1025 21:37:37.936638  104046 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:37:37.937187  104046 main.go:141] libmachine: Using API Version  1
	I1025 21:37:37.937217  104046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:37:37.937602  104046 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:37:37.937833  104046 main.go:141] libmachine: (multinode-086357) Calling .DriverName
	I1025 21:37:37.938066  104046 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:37:37.938095  104046 main.go:141] libmachine: (multinode-086357) Calling .GetSSHHostname
	I1025 21:37:37.941396  104046 main.go:141] libmachine: (multinode-086357) DBG | domain multinode-086357 has defined MAC address 52:54:00:dd:ce:ec in network mk-multinode-086357
	I1025 21:37:37.941922  104046 main.go:141] libmachine: (multinode-086357) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:ce:ec", ip: ""} in network mk-multinode-086357: {Iface:virbr1 ExpiryTime:2023-10-25 22:33:09 +0000 UTC Type:0 Mac:52:54:00:dd:ce:ec Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:multinode-086357 Clientid:01:52:54:00:dd:ce:ec}
	I1025 21:37:37.941964  104046 main.go:141] libmachine: (multinode-086357) DBG | domain multinode-086357 has defined IP address 192.168.39.56 and MAC address 52:54:00:dd:ce:ec in network mk-multinode-086357
	I1025 21:37:37.942219  104046 main.go:141] libmachine: (multinode-086357) Calling .GetSSHPort
	I1025 21:37:37.942386  104046 main.go:141] libmachine: (multinode-086357) Calling .GetSSHKeyPath
	I1025 21:37:37.942557  104046 main.go:141] libmachine: (multinode-086357) Calling .GetSSHUsername
	I1025 21:37:37.942731  104046 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/multinode-086357/id_rsa Username:docker}
	I1025 21:37:38.038341  104046 ssh_runner.go:195] Run: systemctl --version
	I1025 21:37:38.044649  104046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 21:37:38.060135  104046 kubeconfig.go:92] found "multinode-086357" server: "https://192.168.39.56:8443"
	I1025 21:37:38.060163  104046 api_server.go:166] Checking apiserver status ...
	I1025 21:37:38.060200  104046 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 21:37:38.074169  104046 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1877/cgroup
	I1025 21:37:38.085338  104046 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod8b4d609780db4a1dcd484f9a775059c5/a9494e8ed59ae20e4c5ea4ac41306b959d3b7dc78e3642bc82652c413303cb7b"
	I1025 21:37:38.085394  104046 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod8b4d609780db4a1dcd484f9a775059c5/a9494e8ed59ae20e4c5ea4ac41306b959d3b7dc78e3642bc82652c413303cb7b/freezer.state
	I1025 21:37:38.096991  104046 api_server.go:204] freezer state: "THAWED"
	I1025 21:37:38.097017  104046 api_server.go:253] Checking apiserver healthz at https://192.168.39.56:8443/healthz ...
	I1025 21:37:38.101856  104046 api_server.go:279] https://192.168.39.56:8443/healthz returned 200:
	ok
	I1025 21:37:38.101881  104046 status.go:421] multinode-086357 apiserver status = Running (err=<nil>)
	I1025 21:37:38.101894  104046 status.go:257] multinode-086357 status: &{Name:multinode-086357 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 21:37:38.101914  104046 status.go:255] checking status of multinode-086357-m02 ...
	I1025 21:37:38.102292  104046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1025 21:37:38.102343  104046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:37:38.116828  104046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45219
	I1025 21:37:38.117265  104046 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:37:38.117733  104046 main.go:141] libmachine: Using API Version  1
	I1025 21:37:38.117756  104046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:37:38.118098  104046 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:37:38.118277  104046 main.go:141] libmachine: (multinode-086357-m02) Calling .GetState
	I1025 21:37:38.119795  104046 status.go:330] multinode-086357-m02 host status = "Running" (err=<nil>)
	I1025 21:37:38.119814  104046 host.go:66] Checking if "multinode-086357-m02" exists ...
	I1025 21:37:38.120106  104046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1025 21:37:38.120150  104046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:37:38.134395  104046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38815
	I1025 21:37:38.134758  104046 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:37:38.135169  104046 main.go:141] libmachine: Using API Version  1
	I1025 21:37:38.135190  104046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:37:38.135486  104046 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:37:38.135665  104046 main.go:141] libmachine: (multinode-086357-m02) Calling .GetIP
	I1025 21:37:38.138150  104046 main.go:141] libmachine: (multinode-086357-m02) DBG | domain multinode-086357-m02 has defined MAC address 52:54:00:63:45:47 in network mk-multinode-086357
	I1025 21:37:38.138554  104046 main.go:141] libmachine: (multinode-086357-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:45:47", ip: ""} in network mk-multinode-086357: {Iface:virbr1 ExpiryTime:2023-10-25 22:34:27 +0000 UTC Type:0 Mac:52:54:00:63:45:47 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-086357-m02 Clientid:01:52:54:00:63:45:47}
	I1025 21:37:38.138593  104046 main.go:141] libmachine: (multinode-086357-m02) DBG | domain multinode-086357-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:63:45:47 in network mk-multinode-086357
	I1025 21:37:38.138726  104046 host.go:66] Checking if "multinode-086357-m02" exists ...
	I1025 21:37:38.139102  104046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1025 21:37:38.139149  104046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:37:38.153773  104046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39603
	I1025 21:37:38.154295  104046 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:37:38.154874  104046 main.go:141] libmachine: Using API Version  1
	I1025 21:37:38.154903  104046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:37:38.155248  104046 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:37:38.155447  104046 main.go:141] libmachine: (multinode-086357-m02) Calling .DriverName
	I1025 21:37:38.155623  104046 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 21:37:38.155649  104046 main.go:141] libmachine: (multinode-086357-m02) Calling .GetSSHHostname
	I1025 21:37:38.158375  104046 main.go:141] libmachine: (multinode-086357-m02) DBG | domain multinode-086357-m02 has defined MAC address 52:54:00:63:45:47 in network mk-multinode-086357
	I1025 21:37:38.158842  104046 main.go:141] libmachine: (multinode-086357-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:45:47", ip: ""} in network mk-multinode-086357: {Iface:virbr1 ExpiryTime:2023-10-25 22:34:27 +0000 UTC Type:0 Mac:52:54:00:63:45:47 Iaid: IPaddr:192.168.39.214 Prefix:24 Hostname:multinode-086357-m02 Clientid:01:52:54:00:63:45:47}
	I1025 21:37:38.158874  104046 main.go:141] libmachine: (multinode-086357-m02) DBG | domain multinode-086357-m02 has defined IP address 192.168.39.214 and MAC address 52:54:00:63:45:47 in network mk-multinode-086357
	I1025 21:37:38.159043  104046 main.go:141] libmachine: (multinode-086357-m02) Calling .GetSSHPort
	I1025 21:37:38.159224  104046 main.go:141] libmachine: (multinode-086357-m02) Calling .GetSSHKeyPath
	I1025 21:37:38.159406  104046 main.go:141] libmachine: (multinode-086357-m02) Calling .GetSSHUsername
	I1025 21:37:38.159510  104046 sshutil.go:53] new ssh client: &{IP:192.168.39.214 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17488-80960/.minikube/machines/multinode-086357-m02/id_rsa Username:docker}
	I1025 21:37:38.244723  104046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 21:37:38.259488  104046 status.go:257] multinode-086357-m02 status: &{Name:multinode-086357-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1025 21:37:38.259523  104046 status.go:255] checking status of multinode-086357-m03 ...
	I1025 21:37:38.259812  104046 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1025 21:37:38.259848  104046 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:37:38.274469  104046 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44847
	I1025 21:37:38.274856  104046 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:37:38.275309  104046 main.go:141] libmachine: Using API Version  1
	I1025 21:37:38.275337  104046 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:37:38.275643  104046 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:37:38.275843  104046 main.go:141] libmachine: (multinode-086357-m03) Calling .GetState
	I1025 21:37:38.277304  104046 status.go:330] multinode-086357-m03 host status = "Stopped" (err=<nil>)
	I1025 21:37:38.277321  104046 status.go:343] host is not running, skipping remaining checks
	I1025 21:37:38.277328  104046 status.go:257] multinode-086357-m03 status: &{Name:multinode-086357-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (4.02s)

TestMultiNode/serial/StartAfterStop (32.23s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 node start m03 --alsologtostderr
E1025 21:37:43.936377   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
E1025 21:38:06.750011   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-086357 node start m03 --alsologtostderr: (31.580557361s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (32.23s)

TestMultiNode/serial/RestartKeepsNodes (183.37s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-086357
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-086357
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-086357: (27.795227815s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-086357 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-086357 --wait=true -v=8 --alsologtostderr: (2m35.450298797s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-086357
--- PASS: TestMultiNode/serial/RestartKeepsNodes (183.37s)

TestMultiNode/serial/DeleteNode (1.79s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-086357 node delete m03: (1.228980409s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.79s)
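Note: the go-template at multinode_test.go:432 iterates every node's status.conditions and prints the status of the Ready condition, one line per node. Unwrapped for readability, with the expected post-delete output sketched as an assumption (two nodes remain after removing m03):

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    # each surviving node should print " True", e.g. two lines here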

TestMultiNode/serial/StopMultiNode (25.62s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-086357 stop: (25.437268326s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-086357 status: exit status 7 (92.381192ms)

-- stdout --
	multinode-086357
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-086357-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-086357 status --alsologtostderr: exit status 7 (91.401254ms)

-- stdout --
	multinode-086357
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-086357-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1025 21:41:41.251824  105511 out.go:296] Setting OutFile to fd 1 ...
	I1025 21:41:41.251973  105511 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:41:41.251983  105511 out.go:309] Setting ErrFile to fd 2...
	I1025 21:41:41.251988  105511 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1025 21:41:41.252179  105511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17488-80960/.minikube/bin
	I1025 21:41:41.252357  105511 out.go:303] Setting JSON to false
	I1025 21:41:41.252403  105511 mustload.go:65] Loading cluster: multinode-086357
	I1025 21:41:41.252514  105511 notify.go:220] Checking for updates...
	I1025 21:41:41.252953  105511 config.go:182] Loaded profile config "multinode-086357": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1025 21:41:41.252974  105511 status.go:255] checking status of multinode-086357 ...
	I1025 21:41:41.253459  105511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1025 21:41:41.253555  105511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:41:41.267501  105511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38969
	I1025 21:41:41.267965  105511 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:41:41.268583  105511 main.go:141] libmachine: Using API Version  1
	I1025 21:41:41.268611  105511 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:41:41.269006  105511 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:41:41.269199  105511 main.go:141] libmachine: (multinode-086357) Calling .GetState
	I1025 21:41:41.270840  105511 status.go:330] multinode-086357 host status = "Stopped" (err=<nil>)
	I1025 21:41:41.270852  105511 status.go:343] host is not running, skipping remaining checks
	I1025 21:41:41.270857  105511 status.go:257] multinode-086357 status: &{Name:multinode-086357 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 21:41:41.270872  105511 status.go:255] checking status of multinode-086357-m02 ...
	I1025 21:41:41.271198  105511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1025 21:41:41.271240  105511 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1025 21:41:41.284790  105511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34727
	I1025 21:41:41.285191  105511 main.go:141] libmachine: () Calling .GetVersion
	I1025 21:41:41.285581  105511 main.go:141] libmachine: Using API Version  1
	I1025 21:41:41.285603  105511 main.go:141] libmachine: () Calling .SetConfigRaw
	I1025 21:41:41.285866  105511 main.go:141] libmachine: () Calling .GetMachineName
	I1025 21:41:41.286049  105511 main.go:141] libmachine: (multinode-086357-m02) Calling .GetState
	I1025 21:41:41.287456  105511 status.go:330] multinode-086357-m02 host status = "Stopped" (err=<nil>)
	I1025 21:41:41.287470  105511 status.go:343] host is not running, skipping remaining checks
	I1025 21:41:41.287478  105511 status.go:257] multinode-086357-m02 status: &{Name:multinode-086357-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.62s)

TestMultiNode/serial/RestartMultiNode (116.8s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-086357 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E1025 21:41:43.704813   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
E1025 21:42:16.251318   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
E1025 21:42:25.192408   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-086357 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m56.236474506s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-086357 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (116.80s)

TestMultiNode/serial/ValidateNameConflict (53.45s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-086357
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-086357-m02 --driver=kvm2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-086357-m02 --driver=kvm2 : exit status 14 (82.74196ms)

-- stdout --
	* [multinode-086357-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17488-80960/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-80960/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-086357-m02' is duplicated with machine name 'multinode-086357-m02' in profile 'multinode-086357'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-086357-m03 --driver=kvm2 
E1025 21:43:48.237469   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-086357-m03 --driver=kvm2 : (52.281149694s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-086357
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-086357: exit status 80 (239.671169ms)

-- stdout --
	* Adding node m03 to cluster multinode-086357
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-086357-m03 already exists in multinode-086357-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-086357-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (53.45s)

TestPreload (260.54s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-023248 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E1025 21:46:43.704347   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-023248 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (2m42.943752164s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-023248 image pull gcr.io/k8s-minikube/busybox
E1025 21:47:16.250369   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-023248 image pull gcr.io/k8s-minikube/busybox: (2.628965225s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-023248
E1025 21:47:25.192484   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-023248: (13.118346739s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-023248 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E1025 21:48:39.297627   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-023248 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m20.758963789s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-023248 image list
helpers_test.go:175: Cleaning up "test-preload-023248" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-023248
--- PASS: TestPreload (260.54s)

TestScheduledStopUnix (123.16s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-431699 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-431699 --memory=2048 --driver=kvm2 : (51.363182421s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-431699 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-431699 -n scheduled-stop-431699
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-431699 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-431699 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-431699 -n scheduled-stop-431699
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-431699
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-431699 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-431699
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-431699: exit status 7 (77.850538ms)

-- stdout --
	scheduled-stop-431699
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-431699 -n scheduled-stop-431699
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-431699 -n scheduled-stop-431699: exit status 7 (75.599648ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-431699" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-431699
--- PASS: TestScheduledStopUnix (123.16s)
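Note: the scheduled-stop surface exercised above reduces to three flag combinations on the same profile:

    minikube stop -p scheduled-stop-431699 --schedule 5m                  # arm a stop 5 minutes out
    minikube status -p scheduled-stop-431699 --format='{{.TimeToStop}}'   # inspect the countdown
    minikube stop -p scheduled-stop-431699 --cancel-scheduled             # disarm the pending stop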

TestSkaffold (149.87s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3845575088 version
skaffold_test.go:63: skaffold version: v2.8.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-183899 --memory=2600 --driver=kvm2 
E1025 21:51:43.704938   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-183899 --memory=2600 --driver=kvm2 : (53.329771778s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3845575088 run --minikube-profile skaffold-183899 --kube-context skaffold-183899 --status-check=true --port-forward=false --interactive=false
E1025 21:52:16.250644   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
E1025 21:52:25.191801   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3845575088 run --minikube-profile skaffold-183899 --kube-context skaffold-183899 --status-check=true --port-forward=false --interactive=false: (1m21.74962163s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6b548d9ff6-xvc8m" [71b12621-b98d-4c35-bcd7-324c889266f4] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.016674994s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6756d9c476-468b6" [2ea6e1ab-e9dd-4e2e-b8c5-e80be5c7fe1b] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.009662077s
helpers_test.go:175: Cleaning up "skaffold-183899" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-183899
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-183899: (1.172454423s)
--- PASS: TestSkaffold (149.87s)

TestRunningBinaryUpgrade (292.27s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.4243820076.exe start -p running-upgrade-993532 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.4243820076.exe start -p running-upgrade-993532 --memory=2200 --vm-driver=kvm2 : (2m4.474301634s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-993532 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-993532 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (2m42.921471442s)
helpers_test.go:175: Cleaning up "running-upgrade-993532" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-993532
E1025 21:58:18.200373   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/skaffold-183899/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-993532: (1.45549302s)
--- PASS: TestRunningBinaryUpgrade (292.27s)

TestKubernetesUpgrade (237.14s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-088104 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-088104 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (1m45.62882641s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-088104
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-088104: (12.813929051s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-088104 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-088104 status --format={{.Host}}: exit status 7 (108.711994ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-088104 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-088104 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 : (1m6.386620714s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-088104 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-088104 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-088104 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (117.656667ms)

-- stdout --
	* [kubernetes-upgrade-088104] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17488-80960/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-80960/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-088104
	    minikube start -p kubernetes-upgrade-088104 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0881042 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-088104 --kubernetes-version=v1.28.3
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-088104 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 
E1025 21:56:43.704088   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
E1025 21:57:16.250274   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-088104 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 : (50.836820234s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-088104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-088104
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-088104: (1.173655918s)
--- PASS: TestKubernetesUpgrade (237.14s)
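Note: condensed, the upgrade path this test validates is stop-then-restart at the newer version, while the in-place downgrade is refused with K8S_DOWNGRADE_UNSUPPORTED (the delete-and-recreate remedy is spelled out in the stderr block above). Sketch of the supported direction, using the names from this run:

    minikube start -p kubernetes-upgrade-088104 --kubernetes-version=v1.16.0 --driver=kvm2
    minikube stop -p kubernetes-upgrade-088104
    minikube start -p kubernetes-upgrade-088104 --kubernetes-version=v1.28.3 --driver=kvm2
    kubectl --context kubernetes-upgrade-088104 version --output=json   # confirm the new server version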

TestStoppedBinaryUpgrade/Setup (2.19s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.19s)

TestPause/serial/Start (72.85s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-490895 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
E1025 21:58:20.761066   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/skaffold-183899/client.crt: no such file or directory
E1025 21:58:25.881366   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/skaffold-183899/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-490895 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m12.847718543s)
--- PASS: TestPause/serial/Start (72.85s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-016568 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-016568 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (86.90461ms)

-- stdout --
	* [NoKubernetes-016568] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17488
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17488-80960/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17488-80960/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (77.86s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-016568 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-016568 --driver=kvm2 : (1m17.581613533s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-016568 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (77.86s)

TestPause/serial/SecondStartNoReconfiguration (39.71s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-490895 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-490895 --alsologtostderr -v=1 --driver=kvm2 : (39.680852721s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (39.71s)

TestNoKubernetes/serial/StartWithStopK8s (17.5s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-016568 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-016568 --no-kubernetes --driver=kvm2 : (16.189521119s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-016568 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-016568 status -o json: exit status 2 (245.613659ms)

-- stdout --
	{"Name":"NoKubernetes-016568","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-016568
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-016568: (1.062254596s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.50s)

TestNoKubernetes/serial/Start (29.19s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-016568 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-016568 --no-kubernetes --driver=kvm2 : (29.194000436s)
--- PASS: TestNoKubernetes/serial/Start (29.19s)

TestPause/serial/Pause (0.59s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-490895 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.59s)

                                                
                                    
TestPause/serial/VerifyStatus (0.27s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-490895 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-490895 --output=json --layout=cluster: exit status 2 (267.441941ms)

                                                
                                                
-- stdout --
	{"Name":"pause-490895","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-490895","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)
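Note: the --layout=cluster output above nests per-node component states. A minimal Go sketch decoding just the fields visible in this report (per the StatusName values shown: 418 = Paused, 405 = Stopped, 200 = OK); the full schema may carry more:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type component struct {
	Name       string
	StatusCode int
	StatusName string
}

type node struct {
	Name       string
	StatusCode int
	StatusName string
	Components map[string]component
}

type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []node
}

func main() {
	out, err := exec.Command("minikube", "status", "-p", "pause-490895",
		"--output=json", "--layout=cluster").Output()
	if len(out) == 0 && err != nil { // exit status 2 above still prints JSON
		panic(err)
	}
	var cs clusterStatus
	if err := json.Unmarshal(out, &cs); err != nil {
		panic(err)
	}
	for _, n := range cs.Nodes {
		for name, c := range n.Components {
			fmt.Printf("%s/%s: %s (%d)\n", n.Name, name, c.StatusName, c.StatusCode)
		}
	}
}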

                                                
                                    
TestPause/serial/Unpause (0.56s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-490895 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.56s)

                                                
                                    
TestPause/serial/PauseAgain (0.74s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-490895 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.74s)

                                                
                                    
TestPause/serial/DeletePaused (1.1s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-490895 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-490895 --alsologtostderr -v=5: (1.096944415s)
--- PASS: TestPause/serial/DeletePaused (1.10s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (129.61s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1025 22:00:28.238515   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (2m9.613324004s)
--- PASS: TestPause/serial/VerifyDeletedResources (129.61s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-016568 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-016568 "sudo systemctl is-active --quiet service kubelet": exit status 1 (210.748795ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
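Note: the assertion above passes precisely because the command fails: systemctl is-active exits 0 only for an active unit, and the "status 3" in stderr is systemd's inactive code coming back through ssh. A minimal Go sketch of the same check, assuming a minikube binary on PATH and this profile:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-016568",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err == nil {
		fmt.Println("FAIL: kubelet is still active")
	} else if ee, ok := err.(*exec.ExitError); ok {
		// minikube exits 1 here, wrapping systemctl's non-zero status.
		fmt.Printf("OK: kubelet not running (exit %d)\n", ee.ExitCode())
	} else {
		fmt.Println("could not run check:", err)
	}
}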

                                                
                                    
TestNoKubernetes/serial/ProfileList (39.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
E1025 22:00:59.486409   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/skaffold-183899/client.crt: no such file or directory
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (33.110102687s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (6.444394554s)
--- PASS: TestNoKubernetes/serial/ProfileList (39.55s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-016568
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-016568: (2.109833524s)
--- PASS: TestNoKubernetes/serial/Stop (2.11s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (38.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-016568 --driver=kvm2 
E1025 22:01:43.704688   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-016568 --driver=kvm2 : (38.799098508s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (38.80s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-016568 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-016568 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.617663ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (105.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-829877 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E1025 22:03:36.525704   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/gvisor-342758/client.crt: no such file or directory
E1025 22:03:36.530999   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/gvisor-342758/client.crt: no such file or directory
E1025 22:03:36.541304   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/gvisor-342758/client.crt: no such file or directory
E1025 22:03:36.561626   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/gvisor-342758/client.crt: no such file or directory
E1025 22:03:36.601971   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/gvisor-342758/client.crt: no such file or directory
E1025 22:03:36.682316   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/gvisor-342758/client.crt: no such file or directory
E1025 22:03:36.842757   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/gvisor-342758/client.crt: no such file or directory
E1025 22:03:37.163527   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/gvisor-342758/client.crt: no such file or directory
E1025 22:03:37.804572   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/gvisor-342758/client.crt: no such file or directory
E1025 22:03:39.085418   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/gvisor-342758/client.crt: no such file or directory
E1025 22:03:41.647617   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/gvisor-342758/client.crt: no such file or directory
E1025 22:03:43.327579   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/skaffold-183899/client.crt: no such file or directory
E1025 22:03:46.767921   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/gvisor-342758/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-829877 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m45.056890054s)
--- PASS: TestNetworkPlugins/group/auto/Start (105.06s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (84.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-829877 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
E1025 22:03:57.008884   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/gvisor-342758/client.crt: no such file or directory
E1025 22:04:17.489465   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/gvisor-342758/client.crt: no such file or directory
E1025 22:04:58.450252   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/gvisor-342758/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-829877 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m24.212460012s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.21s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-829877 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-829877 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8n8n9" [68eb620d-e334-4403-b650-276ecd6d77c6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8n8n9" [68eb620d-e334-4403-b650-276ecd6d77c6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.01241852s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.45s)
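Note: the "waiting 15m0s for pods matching app=netcat" step above is a label-selector poll. A minimal client-go sketch of the same wait, assuming a kubeconfig at the default path; the test itself uses minikube's own helpers (which also track readiness), not this code:

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(15 * time.Minute) // matches the 15m0s wait above
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=netcat"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("running:", p.Name)
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=netcat")
}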

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-wwwg4" [9a68f198-738e-439b-93d1-28f9942f2248] Running
E1025 22:05:19.298420   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.021320211s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-829877 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)
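Note: the DNS check above resolves the short name kubernetes.default, which only works inside a pod whose resolv.conf carries the cluster search domains. A minimal Go analogue of the nslookup, intended to run in such a pod:

package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("kubernetes.default")
	if err != nil {
		fmt.Println("DNS lookup failed:", err)
		return
	}
	fmt.Println("kubernetes.default resolves to", addrs)
}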

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-829877 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-829877 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
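Note: Localhost and HairPin above are both plain TCP reachability probes (nc -z with a 5s timeout); the hairpin case dials the service name from inside a pod backing that service. A minimal Go sketch of the same probes, meant to run inside the netcat pod:

package main

import (
	"fmt"
	"net"
	"time"
)

// probe opens and immediately closes a TCP connection, like nc -z.
func probe(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	for _, addr := range []string{"localhost:8080", "netcat:8080"} {
		if err := probe(addr); err != nil {
			fmt.Printf("%s: unreachable: %v\n", addr, err)
			continue
		}
		fmt.Printf("%s: reachable\n", addr)
	}
}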

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-829877 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-829877 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-g5hqx" [bd533a79-b032-4d58-a882-fd0fc44fc9db] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-g5hqx" [bd533a79-b032-4d58-a882-fd0fc44fc9db] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.01451055s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.38s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-829877 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-829877 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-829877 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (115.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-829877 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-829877 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m55.708058565s)
--- PASS: TestNetworkPlugins/group/calico/Start (115.71s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (101.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-829877 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-829877 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m41.771734537s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (101.77s)

                                                
                                    
TestNetworkPlugins/group/false/Start (77.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-829877 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E1025 22:07:16.250798   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
E1025 22:07:25.191421   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-829877 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m17.870966055s)
--- PASS: TestNetworkPlugins/group/false/Start (77.87s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-dzwgs" [3708389b-0ee5-47bd-b716-7b4b620b2ff6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.02418801s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-829877 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-829877 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qktwz" [75fa216e-7fe6-46be-8d3d-c3cf63872b46] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qktwz" [75fa216e-7fe6-46be-8d3d-c3cf63872b46] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.010310526s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.36s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-829877 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (14.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-829877 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-v7jzh" [ad4e12e5-12ff-498d-8ddd-559cb34780d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-v7jzh" [ad4e12e5-12ff-498d-8ddd-559cb34780d3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.012948407s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.41s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-829877 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-829877 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.47s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-829877 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-829877 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-829877 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-829877 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (92.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-829877 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-829877 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m32.066164262s)
--- PASS: TestNetworkPlugins/group/flannel/Start (92.07s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (111.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-829877 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E1025 22:08:15.639907   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/skaffold-183899/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-829877 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m51.71219586s)
--- PASS: TestNetworkPlugins/group/bridge/Start (111.71s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-829877 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (12.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-829877 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-64ck9" [daf28abf-b666-4810-8bf4-7dc3e875cf74] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-64ck9" [daf28abf-b666-4810-8bf4-7dc3e875cf74] Running
E1025 22:08:36.525521   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/gvisor-342758/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.011350497s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.36s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-829877 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-829877 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-829877 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (101.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-829877 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E1025 22:09:04.211488   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/gvisor-342758/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-829877 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m41.278058152s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (101.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-kvfwb" [ac10cc68-abae-453c-a0f2-de85aac447dc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.032372001s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-829877 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (13.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-829877 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vndr4" [2301d876-7728-4196-a58e-8ae6cdf04288] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-vndr4" [2301d876-7728-4196-a58e-8ae6cdf04288] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.011074351s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.37s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-829877 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-829877 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-829877 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-829877 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-829877 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s5qvt" [dca861e5-e892-4b36-ace0-91569037a259] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1025 22:10:08.567135   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/auto-829877/client.crt: no such file or directory
E1025 22:10:08.580374   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/auto-829877/client.crt: no such file or directory
E1025 22:10:08.592357   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/auto-829877/client.crt: no such file or directory
E1025 22:10:08.616371   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/auto-829877/client.crt: no such file or directory
E1025 22:10:08.656676   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/auto-829877/client.crt: no such file or directory
E1025 22:10:08.736959   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/auto-829877/client.crt: no such file or directory
E1025 22:10:08.897332   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/auto-829877/client.crt: no such file or directory
E1025 22:10:09.217983   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/auto-829877/client.crt: no such file or directory
E1025 22:10:09.858544   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/auto-829877/client.crt: no such file or directory
E1025 22:10:11.138894   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/auto-829877/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-s5qvt" [dca861e5-e892-4b36-ace0-91569037a259] Running
E1025 22:10:13.699667   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/auto-829877/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.018544064s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.50s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (77.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-829877 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-829877 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m17.863012215s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (77.86s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-829877 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-829877 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-829877 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (164.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-820759 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-820759 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (2m44.017226685s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (164.02s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-829877 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-829877 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jkb76" [15baa760-6980-4f9e-aed7-b5339970a4a3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1025 22:10:38.697671   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/kindnet-829877/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-jkb76" [15baa760-6980-4f9e-aed7-b5339970a4a3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.012508672s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.36s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-829877 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-829877 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-829877 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.28s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.46s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-634233
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-634233: (1.457156368s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.46s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (142.66s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-252683 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3
E1025 22:10:59.177900   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/kindnet-829877/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-252683 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3: (2m22.664739531s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (142.66s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (111.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-475300 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3
E1025 22:11:26.751603   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
E1025 22:11:30.505072   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/auto-829877/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-475300 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3: (1m51.670692487s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (111.67s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-829877 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (12.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-829877 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6h29j" [b55df54e-b2b4-45ad-a144-087b6ff988d9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1025 22:11:40.138641   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/kindnet-829877/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-6h29j" [b55df54e-b2b4-45ad-a144-087b6ff988d9] Running
E1025 22:11:43.705112   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.017997315s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.40s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-829877 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-829877 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-829877 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.20s)
E1025 22:20:18.216429   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/kindnet-829877/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-847378 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3
E1025 22:12:16.250294   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
E1025 22:12:25.191473   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
E1025 22:12:32.995527   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/calico-829877/client.crt: no such file or directory
E1025 22:12:33.000810   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/calico-829877/client.crt: no such file or directory
E1025 22:12:33.011129   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/calico-829877/client.crt: no such file or directory
E1025 22:12:33.031386   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/calico-829877/client.crt: no such file or directory
E1025 22:12:33.071698   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/calico-829877/client.crt: no such file or directory
E1025 22:12:33.151971   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/calico-829877/client.crt: no such file or directory
E1025 22:12:33.312933   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/calico-829877/client.crt: no such file or directory
E1025 22:12:33.633998   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/calico-829877/client.crt: no such file or directory
E1025 22:12:34.274140   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/calico-829877/client.crt: no such file or directory
E1025 22:12:34.908586   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/custom-flannel-829877/client.crt: no such file or directory
E1025 22:12:34.913892   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/custom-flannel-829877/client.crt: no such file or directory
E1025 22:12:34.924159   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/custom-flannel-829877/client.crt: no such file or directory
E1025 22:12:34.944497   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/custom-flannel-829877/client.crt: no such file or directory
E1025 22:12:34.984790   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/custom-flannel-829877/client.crt: no such file or directory
E1025 22:12:35.065158   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/custom-flannel-829877/client.crt: no such file or directory
E1025 22:12:35.226131   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/custom-flannel-829877/client.crt: no such file or directory
E1025 22:12:35.546533   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/custom-flannel-829877/client.crt: no such file or directory
E1025 22:12:35.554737   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/calico-829877/client.crt: no such file or directory
E1025 22:12:36.186680   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/custom-flannel-829877/client.crt: no such file or directory
E1025 22:12:37.467315   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/custom-flannel-829877/client.crt: no such file or directory
E1025 22:12:38.114911   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/calico-829877/client.crt: no such file or directory
E1025 22:12:40.027955   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/custom-flannel-829877/client.crt: no such file or directory
E1025 22:12:43.235115   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/calico-829877/client.crt: no such file or directory
E1025 22:12:45.148846   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/custom-flannel-829877/client.crt: no such file or directory
E1025 22:12:52.426183   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/auto-829877/client.crt: no such file or directory
E1025 22:12:53.475311   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/calico-829877/client.crt: no such file or directory
E1025 22:12:55.389660   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/custom-flannel-829877/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-847378 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3: (1m24.004503321s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (84.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.49s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-475300 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f97a086b-d2f0-4d56-bb12-4444d9d55680] Pending
helpers_test.go:344: "busybox" [f97a086b-d2f0-4d56-bb12-4444d9d55680] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1025 22:13:02.059615   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/kindnet-829877/client.crt: no such file or directory
helpers_test.go:344: "busybox" [f97a086b-d2f0-4d56-bb12-4444d9d55680] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.027576312s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-475300 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.49s)
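
For context on the DeployApp subtests in this report: each one applies testdata/busybox.yaml (the manifest itself is not reproduced here) and then polls until pods matching integration-test=busybox are Running. A minimal Go sketch of that kind of wait loop follows; it uses client-go directly and is not minikube's actual helpers_test.go code (the kubeconfig path, poll interval, and error handling are assumptions):

// wait_sketch.go: poll until every pod matching a label selector is Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // not there yet (or a transient API error): keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // e.g. still Pending, as in the transitions logged above
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// 8m0s matches the budget shown in the log; this run passed in ~10s.
	if err := waitForLabel(context.Background(), cs, "default", "integration-test=busybox", 8*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("integration-test=busybox healthy")
}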

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-475300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-475300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.083581842s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-475300 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (13.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-475300 --alsologtostderr -v=3
E1025 22:13:13.955734   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/calico-829877/client.crt: no such file or directory
E1025 22:13:15.638934   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/skaffold-183899/client.crt: no such file or directory
E1025 22:13:15.870706   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/custom-flannel-829877/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-475300 --alsologtostderr -v=3: (13.143556095s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-820759 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a049ffe2-6567-464f-b754-c67930c4198d] Pending
helpers_test.go:344: "busybox" [a049ffe2-6567-464f-b754-c67930c4198d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a049ffe2-6567-464f-b754-c67930c4198d] Running
E1025 22:13:24.813414   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/false-829877/client.crt: no such file or directory
E1025 22:13:24.893987   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/false-829877/client.crt: no such file or directory
E1025 22:13:25.054697   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/false-829877/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.046861582s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-820759 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.52s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (11.53s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-252683 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [689fe0e6-d889-439d-8cdf-d8561d1da82f] Pending
helpers_test.go:344: "busybox" [689fe0e6-d889-439d-8cdf-d8561d1da82f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [689fe0e6-d889-439d-8cdf-d8561d1da82f] Running
E1025 22:13:25.374829   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/false-829877/client.crt: no such file or directory
E1025 22:13:26.016048   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/false-829877/client.crt: no such file or directory
E1025 22:13:27.296881   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/false-829877/client.crt: no such file or directory
E1025 22:13:29.857311   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/false-829877/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.038879706s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-252683 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.53s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-475300 -n embed-certs-475300
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-475300 -n embed-certs-475300: exit status 7 (100.833516ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-475300 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.28s)
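
A note on the "(may be ok)" lines in the EnableAddonAfterStop subtests: minikube status exits non-zero when the host is down (exit status 7 in the runs here) while still printing the state, so the test reads stdout and tolerates the exit code before enabling the dashboard addon. A rough Go sketch of that tolerance; this is not the test's actual code, and a minikube binary on PATH is assumed:

// status_sketch.go: read {{.Host}} from `minikube status`, treating a
// non-zero exit as acceptable when the reported state is "Stopped".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostState(profile string) (string, error) {
	out, err := exec.Command("minikube", "status",
		"--format={{.Host}}", "-p", profile).Output()
	state := strings.TrimSpace(string(out)) // stdout is captured even on failure
	if ee, ok := err.(*exec.ExitError); ok && state == "Stopped" {
		// In the runs above a stopped host reports exit status 7; the
		// status text is still usable, so treat this as a soft failure.
		fmt.Printf("status error: exit status %d (may be ok)\n", ee.ExitCode())
		return state, nil
	}
	return state, err
}

func main() {
	if state, err := hostState("embed-certs-475300"); err == nil {
		fmt.Println("host:", state)
	}
}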

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (331.98s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-475300 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3
E1025 22:13:24.736897   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/false-829877/client.crt: no such file or directory
E1025 22:13:24.742238   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/false-829877/client.crt: no such file or directory
E1025 22:13:24.752531   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/false-829877/client.crt: no such file or directory
E1025 22:13:24.772809   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/false-829877/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-475300 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3: (5m31.646357388s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-475300 -n embed-certs-475300
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (331.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-847378 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a29ea284-b44b-4ce6-810b-1626718cadec] Pending
helpers_test.go:344: "busybox" [a29ea284-b44b-4ce6-810b-1626718cadec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a29ea284-b44b-4ce6-810b-1626718cadec] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 12.039019028s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-847378 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.53s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-820759 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-820759 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.48s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-252683 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-252683 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.368407137s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-252683 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.48s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-820759 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-820759 --alsologtostderr -v=3: (13.3718391s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.37s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (13.55s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-252683 --alsologtostderr -v=3
E1025 22:13:34.978306   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/false-829877/client.crt: no such file or directory
E1025 22:13:36.525553   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/gvisor-342758/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-252683 --alsologtostderr -v=3: (13.552597471s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.55s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-847378 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-847378 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.063381857s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-847378 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-847378 --alsologtostderr -v=3
E1025 22:13:45.219545   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/false-829877/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-847378 --alsologtostderr -v=3: (13.142450851s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.30s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-820759 -n old-k8s-version-820759
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-820759 -n old-k8s-version-820759: exit status 7 (114.22988ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-820759 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (454.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-820759 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-820759 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (7m34.660384163s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-820759 -n old-k8s-version-820759
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (454.93s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-252683 -n no-preload-252683
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-252683 -n no-preload-252683: exit status 7 (90.693248ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-252683 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (353.60s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-252683 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3
E1025 22:13:54.916667   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/calico-829877/client.crt: no such file or directory
E1025 22:13:56.831643   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/custom-flannel-829877/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-252683 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3: (5m53.173016769s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-252683 -n no-preload-252683
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (353.60s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-847378 -n default-k8s-diff-port-847378
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-847378 -n default-k8s-diff-port-847378: exit status 7 (78.595376ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-847378 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (375.60s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-847378 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3
E1025 22:14:05.700607   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/false-829877/client.crt: no such file or directory
E1025 22:14:38.331708   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/flannel-829877/client.crt: no such file or directory
E1025 22:14:38.337102   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/flannel-829877/client.crt: no such file or directory
E1025 22:14:38.347407   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/flannel-829877/client.crt: no such file or directory
E1025 22:14:38.367701   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/flannel-829877/client.crt: no such file or directory
E1025 22:14:38.408052   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/flannel-829877/client.crt: no such file or directory
E1025 22:14:38.488702   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/flannel-829877/client.crt: no such file or directory
E1025 22:14:38.649255   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/flannel-829877/client.crt: no such file or directory
E1025 22:14:38.688521   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/skaffold-183899/client.crt: no such file or directory
E1025 22:14:38.970102   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/flannel-829877/client.crt: no such file or directory
E1025 22:14:39.610491   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/flannel-829877/client.crt: no such file or directory
E1025 22:14:40.891468   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/flannel-829877/client.crt: no such file or directory
E1025 22:14:43.452483   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/flannel-829877/client.crt: no such file or directory
E1025 22:14:46.661319   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/false-829877/client.crt: no such file or directory
E1025 22:14:48.573211   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/flannel-829877/client.crt: no such file or directory
E1025 22:14:58.814184   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/flannel-829877/client.crt: no such file or directory
E1025 22:15:04.884828   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/bridge-829877/client.crt: no such file or directory
E1025 22:15:04.890077   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/bridge-829877/client.crt: no such file or directory
E1025 22:15:04.900339   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/bridge-829877/client.crt: no such file or directory
E1025 22:15:04.920648   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/bridge-829877/client.crt: no such file or directory
E1025 22:15:04.961769   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/bridge-829877/client.crt: no such file or directory
E1025 22:15:05.042952   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/bridge-829877/client.crt: no such file or directory
E1025 22:15:05.203447   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/bridge-829877/client.crt: no such file or directory
E1025 22:15:05.524472   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/bridge-829877/client.crt: no such file or directory
E1025 22:15:06.165461   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/bridge-829877/client.crt: no such file or directory
E1025 22:15:07.446040   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/bridge-829877/client.crt: no such file or directory
E1025 22:15:08.568119   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/auto-829877/client.crt: no such file or directory
E1025 22:15:10.006200   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/bridge-829877/client.crt: no such file or directory
E1025 22:15:15.126855   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/bridge-829877/client.crt: no such file or directory
E1025 22:15:16.837511   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/calico-829877/client.crt: no such file or directory
E1025 22:15:18.216280   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/kindnet-829877/client.crt: no such file or directory
E1025 22:15:18.752307   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/custom-flannel-829877/client.crt: no such file or directory
E1025 22:15:19.294558   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/flannel-829877/client.crt: no such file or directory
E1025 22:15:25.367962   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/bridge-829877/client.crt: no such file or directory
E1025 22:15:36.267019   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/auto-829877/client.crt: no such file or directory
E1025 22:15:37.780007   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/enable-default-cni-829877/client.crt: no such file or directory
E1025 22:15:37.785283   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/enable-default-cni-829877/client.crt: no such file or directory
E1025 22:15:37.795594   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/enable-default-cni-829877/client.crt: no such file or directory
E1025 22:15:37.815940   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/enable-default-cni-829877/client.crt: no such file or directory
E1025 22:15:37.856244   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/enable-default-cni-829877/client.crt: no such file or directory
E1025 22:15:37.936583   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/enable-default-cni-829877/client.crt: no such file or directory
E1025 22:15:38.097715   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/enable-default-cni-829877/client.crt: no such file or directory
E1025 22:15:38.418518   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/enable-default-cni-829877/client.crt: no such file or directory
E1025 22:15:39.058795   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/enable-default-cni-829877/client.crt: no such file or directory
E1025 22:15:40.339660   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/enable-default-cni-829877/client.crt: no such file or directory
E1025 22:15:42.900839   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/enable-default-cni-829877/client.crt: no such file or directory
E1025 22:15:45.849024   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/bridge-829877/client.crt: no such file or directory
E1025 22:15:45.900263   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/kindnet-829877/client.crt: no such file or directory
E1025 22:15:48.021649   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/enable-default-cni-829877/client.crt: no such file or directory
E1025 22:15:58.262589   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/enable-default-cni-829877/client.crt: no such file or directory
E1025 22:16:00.255787   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/flannel-829877/client.crt: no such file or directory
E1025 22:16:08.582105   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/false-829877/client.crt: no such file or directory
E1025 22:16:18.743719   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/enable-default-cni-829877/client.crt: no such file or directory
E1025 22:16:26.810078   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/bridge-829877/client.crt: no such file or directory
E1025 22:16:34.878772   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/kubenet-829877/client.crt: no such file or directory
E1025 22:16:34.884049   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/kubenet-829877/client.crt: no such file or directory
E1025 22:16:34.894324   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/kubenet-829877/client.crt: no such file or directory
E1025 22:16:34.914728   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/kubenet-829877/client.crt: no such file or directory
E1025 22:16:34.955232   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/kubenet-829877/client.crt: no such file or directory
E1025 22:16:35.035569   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/kubenet-829877/client.crt: no such file or directory
E1025 22:16:35.196152   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/kubenet-829877/client.crt: no such file or directory
E1025 22:16:35.516587   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/kubenet-829877/client.crt: no such file or directory
E1025 22:16:36.157076   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/kubenet-829877/client.crt: no such file or directory
E1025 22:16:37.437916   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/kubenet-829877/client.crt: no such file or directory
E1025 22:16:39.998589   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/kubenet-829877/client.crt: no such file or directory
E1025 22:16:43.705031   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/addons-245571/client.crt: no such file or directory
E1025 22:16:45.119141   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/kubenet-829877/client.crt: no such file or directory
E1025 22:16:55.359819   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/kubenet-829877/client.crt: no such file or directory
E1025 22:16:59.704877   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/enable-default-cni-829877/client.crt: no such file or directory
E1025 22:17:08.239143   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
E1025 22:17:15.840480   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/kubenet-829877/client.crt: no such file or directory
E1025 22:17:16.251088   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/ingress-addon-legacy-106045/client.crt: no such file or directory
E1025 22:17:22.176628   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/flannel-829877/client.crt: no such file or directory
E1025 22:17:25.191506   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/functional-389152/client.crt: no such file or directory
E1025 22:17:32.996133   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/calico-829877/client.crt: no such file or directory
E1025 22:17:34.908211   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/custom-flannel-829877/client.crt: no such file or directory
E1025 22:17:48.730715   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/bridge-829877/client.crt: no such file or directory
E1025 22:17:56.801255   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/kubenet-829877/client.crt: no such file or directory
E1025 22:18:00.677885   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/calico-829877/client.crt: no such file or directory
E1025 22:18:02.592621   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/custom-flannel-829877/client.crt: no such file or directory
E1025 22:18:15.639666   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/skaffold-183899/client.crt: no such file or directory
E1025 22:18:21.625387   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/enable-default-cni-829877/client.crt: no such file or directory
E1025 22:18:24.737393   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/false-829877/client.crt: no such file or directory
E1025 22:18:36.526556   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/gvisor-342758/client.crt: no such file or directory
E1025 22:18:52.422447   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/false-829877/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-847378 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3: (6m15.050140939s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-847378 -n default-k8s-diff-port-847378
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (375.60s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (24.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-78gwj" [70b54aa4-2b4e-46a2-a4e7-919a9ad435e9] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-78gwj" [70b54aa4-2b4e-46a2-a4e7-919a9ad435e9] Running
E1025 22:19:18.722359   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/kubenet-829877/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 24.021964797s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (24.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-78gwj" [70b54aa4-2b4e-46a2-a4e7-919a9ad435e9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013786138s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-475300 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-475300 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.65s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-475300 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-475300 -n embed-certs-475300
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-475300 -n embed-certs-475300: exit status 2 (262.398637ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-475300 -n embed-certs-475300
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-475300 -n embed-certs-475300: exit status 2 (269.962633ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-475300 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-475300 -n embed-certs-475300
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-475300 -n embed-certs-475300
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.65s)
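
The Pause subtests follow a fixed sequence: pause the profile, expect status to report APIServer as "Paused" and Kubelet as "Stopped" (each via a tolerated exit status 2), then unpause and re-check. A compact Go sketch of that sequence, with the profile name taken from the log above and a local minikube binary assumed (not the test's own code):

// pause_sketch.go: drive minikube pause/unpause and read back both status fields.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func status(profile, field string) string {
	// A non-zero exit (status 2 in the runs above) still yields usable stdout.
	out, _ := exec.Command("minikube", "status",
		"--format={{."+field+"}}", "-p", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	const profile = "embed-certs-475300"
	if err := exec.Command("minikube", "pause", "-p", profile).Run(); err != nil {
		panic(err)
	}
	fmt.Println("APIServer:", status(profile, "APIServer")) // expect "Paused"
	fmt.Println("Kubelet:", status(profile, "Kubelet"))     // expect "Stopped"
	if err := exec.Command("minikube", "unpause", "-p", profile).Run(); err != nil {
		panic(err)
	}
	fmt.Println("APIServer:", status(profile, "APIServer")) // expect a running state again
}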

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (80.71s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-506800 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3
E1025 22:19:38.331622   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/flannel-829877/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-506800 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3: (1m20.709096507s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (80.71s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (24.04s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2bstt" [5918217b-96c8-4a81-b70f-b638095373f5] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2bstt" [5918217b-96c8-4a81-b70f-b638095373f5] Running
E1025 22:19:59.572341   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/gvisor-342758/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 24.043470889s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (24.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2bstt" [5918217b-96c8-4a81-b70f-b638095373f5] Running
E1025 22:20:04.884697   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/bridge-829877/client.crt: no such file or directory
E1025 22:20:06.017796   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/flannel-829877/client.crt: no such file or directory
E1025 22:20:08.568095   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/auto-829877/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016724292s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-252683 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.40s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-252683 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.40s)
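
The VerifyKubernetesImages subtests list images over SSH with crictl and report any tag outside minikube's expected set, which is why the two gcr.io/k8s-minikube tags above are called out as "non-minikube" findings. A minimal Go sketch of that audit follows; the JSON field names match crictl's -o json output, but the allowlist prefixes are illustrative guesses rather than minikube's real list, and the sketch runs crictl locally instead of via `minikube ssh`:

// images_sketch.go: parse `crictl images -o json` and flag unexpected tags.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	// Illustrative allowlist: the real expectations live in the test, not here.
	allowed := []string{"registry.k8s.io/", "docker.io/kubernetesui/"}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			ok := false
			for _, prefix := range allowed {
				if strings.HasPrefix(tag, prefix) {
					ok = true
					break
				}
			}
			if !ok {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}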

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (4.36s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-252683 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-252683 --alsologtostderr -v=1: (2.025355037s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-252683 -n no-preload-252683
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-252683 -n no-preload-252683: exit status 2 (320.227652ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-252683 -n no-preload-252683
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-252683 -n no-preload-252683: exit status 2 (280.021265ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-252683 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-252683 -n no-preload-252683
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-252683 -n no-preload-252683
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.36s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (18.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xjlf2" [908505b7-e6e2-4938-b436-6914ba58e047] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xjlf2" [908505b7-e6e2-4938-b436-6914ba58e047] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.021700637s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (18.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-xjlf2" [908505b7-e6e2-4938-b436-6914ba58e047] Running
E1025 22:20:32.571202   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/bridge-829877/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014673217s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-847378 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-847378 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.64s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-847378 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-847378 -n default-k8s-diff-port-847378
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-847378 -n default-k8s-diff-port-847378: exit status 2 (266.126207ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-847378 -n default-k8s-diff-port-847378
E1025 22:20:37.780197   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/enable-default-cni-829877/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-847378 -n default-k8s-diff-port-847378: exit status 2 (277.8654ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-847378 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-847378 -n default-k8s-diff-port-847378
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-847378 -n default-k8s-diff-port-847378
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.64s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-506800 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-506800 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.107102964s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/newest-cni/serial/Stop (13.13s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-506800 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-506800 --alsologtostderr -v=3: (13.134439832s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.13s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-506800 -n newest-cni-506800
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-506800 -n newest-cni-506800: exit status 7 (83.587986ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-506800 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (47.94s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-506800 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3
E1025 22:21:05.466511   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/enable-default-cni-829877/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-506800 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3: (47.650278023s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-506800 -n newest-cni-506800
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (47.94s)
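
SecondStart passes once the restarted cluster reports a Running host: the final check at start_stop_delete_test.go:262 above boils down to reading `status --format={{.Host}}`. An illustrative polling sketch under that reading; the two-minute timeout and five-second interval are arbitrary choices, not the harness's values.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	bin, profile := "out/minikube-linux-amd64", "newest-cni-506800"
	deadline := time.Now().Add(2 * time.Minute)

	for time.Now().Before(deadline) {
		// Poll the host state; non-zero exits are tolerated while starting.
		out, _ := exec.Command(bin, "status",
			"--format={{.Host}}", "-p", profile, "-n", profile).Output()
		if strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("host is Running")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for host to reach Running")
}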

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-tcrrw" [9b676aa0-0a76-41e2-9a1b-b7fede1b4713] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016863094s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-tcrrw" [9b676aa0-0a76-41e2-9a1b-b7fede1b4713] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016723907s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-820759 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/Pause (2.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-820759 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-820759 -n old-k8s-version-820759
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-820759 -n old-k8s-version-820759: exit status 2 (255.384642ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-820759 -n old-k8s-version-820759
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-820759 -n old-k8s-version-820759: exit status 2 (271.377581ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-820759 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-820759 -n old-k8s-version-820759
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-820759 -n old-k8s-version-820759
E1025 22:21:34.879410   88244 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17488-80960/.minikube/profiles/kubenet-829877/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.49s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-506800 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (2.33s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-506800 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-506800 -n newest-cni-506800
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-506800 -n newest-cni-506800: exit status 2 (245.233367ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-506800 -n newest-cni-506800
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-506800 -n newest-cni-506800: exit status 2 (264.57217ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-506800 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-506800 -n newest-cni-506800
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-506800 -n newest-cni-506800
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.33s)

Test skip (31/321)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

TestDownloadOnly/v1.28.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

TestDownloadOnly/v1.28.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (3.8s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-829877 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-829877

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-829877

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-829877

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-829877

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-829877

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-829877

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-829877

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-829877

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-829877

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-829877

>>> host: /etc/nsswitch.conf:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: /etc/hosts:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: /etc/resolv.conf:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-829877

>>> host: crictl pods:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: crictl containers:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> k8s: describe netcat deployment:
error: context "cilium-829877" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-829877" does not exist

>>> k8s: netcat logs:
error: context "cilium-829877" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-829877" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-829877" does not exist

>>> k8s: coredns logs:
error: context "cilium-829877" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-829877" does not exist

>>> k8s: api server logs:
error: context "cilium-829877" does not exist

>>> host: /etc/cni:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: ip a s:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: ip r s:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: iptables-save:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: iptables table nat:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-829877

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-829877

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-829877" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-829877" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-829877

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-829877

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-829877" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-829877" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-829877" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-829877" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-829877" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: kubelet daemon config:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> k8s: kubelet logs:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-829877

>>> host: docker daemon status:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: docker daemon config:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: docker system info:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: cri-docker daemon status:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: cri-docker daemon config:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: cri-dockerd version:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: containerd daemon status:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: containerd daemon config:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: containerd config dump:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: crio daemon status:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: crio daemon config:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: /etc/crio:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

>>> host: crio config:
* Profile "cilium-829877" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-829877"

----------------------- debugLogs end: cilium-829877 [took: 3.635692139s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-829877" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-829877
--- SKIP: TestNetworkPlugins/group/cilium (3.80s)
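
The cilium variant is skipped up front (net_test.go:102) rather than run and failed; in Go's testing package that is a plain t.Skip guard executed before any cluster work. A hypothetical reconstruction, with only the skip message taken from the log:

package net_test // illustrative package name, not minikube's actual net_test.go

import "testing"

// TestNetworkPluginsCilium sketches the guard pattern behind the SKIP above:
// bail out before any cluster is created so the variant is reported as skipped.
func TestNetworkPluginsCilium(t *testing.T) {
	plugin := "cilium"
	if plugin == "cilium" {
		t.Skip("Skipping the test as it's interfering with other tests and is outdated")
	}
}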

TestStartStop/group/disable-driver-mounts (0.56s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-603677" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-603677
--- SKIP: TestStartStop/group/disable-driver-mounts (0.56s)